2023-07-18 12:14:32,074 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d
2023-07-18 12:14:32,094 INFO  [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-18 12:14:32,117 INFO  [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-18 12:14:32,118 INFO  [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299, deleteOnExit=true
2023-07-18 12:14:32,118 INFO  [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-18 12:14:32,119 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/test.cache.data in system properties and HBase conf
2023-07-18 12:14:32,119 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.tmp.dir in system properties and HBase conf
2023-07-18 12:14:32,120 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir in system properties and HBase conf
2023-07-18 12:14:32,120 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-18 12:14:32,121 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-18 12:14:32,121 INFO  [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-18 12:14:32,236 WARN  [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-18 12:14:32,666 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-18 12:14:32,672 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-18 12:14:32,673 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-18 12:14:32,673 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-18 12:14:32,674 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-18 12:14:32,674 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-18 12:14:32,675 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-18 12:14:32,675 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-18 12:14:32,675 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-18 12:14:32,676 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-18 12:14:32,676 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/nfs.dump.dir in system properties and HBase conf
2023-07-18 12:14:32,677 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir in system properties and HBase conf
2023-07-18 12:14:32,677 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-18 12:14:32,678 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-18 12:14:32,678 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-18 12:14:33,265 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-18 12:14:33,269 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-18 12:14:33,583 WARN  [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-18 12:14:33,772 INFO  [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-18 12:14:33,798 WARN  [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-18 12:14:33,840 INFO  [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-18 12:14:33,881 INFO  [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/Jetty_localhost_39115_hdfs____s94fix/webapp
2023-07-18 12:14:34,020 INFO  [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39115
2023-07-18 12:14:34,030 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-18 12:14:34,031 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-18 12:14:34,482 WARN  [Listener at localhost/46497] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-18 12:14:34,576 WARN  [Listener at localhost/46497] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-18 12:14:34,596 WARN  [Listener at localhost/46497] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-18 12:14:34,603 INFO  [Listener at localhost/46497] log.Slf4jLog(67): jetty-6.1.26
2023-07-18 12:14:34,608 INFO  [Listener at localhost/46497] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/Jetty_localhost_41469_datanode____al5nb5/webapp
2023-07-18 12:14:34,771 INFO  [Listener at localhost/46497] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41469
2023-07-18 12:14:35,200 WARN  [Listener at localhost/32881] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-18 12:14:35,214 WARN  [Listener at localhost/32881] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-18 12:14:35,217 WARN  [Listener at localhost/32881] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-18 12:14:35,219 INFO  [Listener at localhost/32881] log.Slf4jLog(67): jetty-6.1.26
2023-07-18 12:14:35,223 INFO  [Listener at localhost/32881] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/Jetty_localhost_35965_datanode____rh0b25/webapp
2023-07-18 12:14:35,367 INFO  [Listener at localhost/32881] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35965
2023-07-18 12:14:35,394 WARN  [Listener at localhost/36415] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-18 12:14:35,432 WARN  [Listener at localhost/36415] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-18 12:14:35,436 WARN  [Listener at localhost/36415] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-18 12:14:35,437 INFO  [Listener at localhost/36415] log.Slf4jLog(67): jetty-6.1.26
2023-07-18 12:14:35,443 INFO  [Listener at localhost/36415] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/Jetty_localhost_34599_datanode____xr8gl7/webapp
2023-07-18 12:14:35,607 INFO  [Listener at localhost/36415] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34599
2023-07-18 12:14:35,673 WARN  [Listener at localhost/37687] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-18 12:14:35,924 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1fb67b7f9db2169d: Processing first storage report for DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5 from datanode 69068bff-c55e-463e-91c5-67412dc24480
2023-07-18 12:14:35,926 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1fb67b7f9db2169d: from storage DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5 node DatanodeRegistration(127.0.0.1:43123, datanodeUuid=69068bff-c55e-463e-91c5-67412dc24480, infoPort=37283, infoSecurePort=0, ipcPort=36415, storageInfo=lv=-57;cid=testClusterID;nsid=257403860;c=1689682473336), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-18 12:14:35,926 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaf13a6670f0761ca: Processing first storage report for DS-bb0055bf-2583-488f-88cd-6e67586120a0 from datanode 4562afda-89af-40f7-b2a1-8b6da745d53c
2023-07-18 12:14:35,926 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaf13a6670f0761ca: from storage DS-bb0055bf-2583-488f-88cd-6e67586120a0 node DatanodeRegistration(127.0.0.1:35987, datanodeUuid=4562afda-89af-40f7-b2a1-8b6da745d53c, infoPort=34165, infoSecurePort=0, ipcPort=32881, storageInfo=lv=-57;cid=testClusterID;nsid=257403860;c=1689682473336), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-18 12:14:35,926 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8ed0d74e0aeb82f6: Processing first storage report for DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7 from datanode b9d02080-3f04-4581-8baf-f681a8a8cfcf
2023-07-18 12:14:35,927 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8ed0d74e0aeb82f6: from storage DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7 node DatanodeRegistration(127.0.0.1:43097, datanodeUuid=b9d02080-3f04-4581-8baf-f681a8a8cfcf, infoPort=43027, infoSecurePort=0, ipcPort=37687, storageInfo=lv=-57;cid=testClusterID;nsid=257403860;c=1689682473336), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-18 12:14:35,927 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1fb67b7f9db2169d: Processing first storage report for DS-ea7e4fb9-abc1-4775-b9dc-f53890da3af4 from datanode 69068bff-c55e-463e-91c5-67412dc24480
2023-07-18 12:14:35,927 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1fb67b7f9db2169d: from storage DS-ea7e4fb9-abc1-4775-b9dc-f53890da3af4 node DatanodeRegistration(127.0.0.1:43123, datanodeUuid=69068bff-c55e-463e-91c5-67412dc24480, infoPort=37283, infoSecurePort=0, ipcPort=36415, storageInfo=lv=-57;cid=testClusterID;nsid=257403860;c=1689682473336), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-18 12:14:35,927 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaf13a6670f0761ca: Processing first storage report for DS-d43e89bf-a50e-4900-82a1-03b1b8826553 from datanode 4562afda-89af-40f7-b2a1-8b6da745d53c
2023-07-18 12:14:35,927 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaf13a6670f0761ca: from storage DS-d43e89bf-a50e-4900-82a1-03b1b8826553 node DatanodeRegistration(127.0.0.1:35987, datanodeUuid=4562afda-89af-40f7-b2a1-8b6da745d53c, infoPort=34165, infoSecurePort=0, ipcPort=32881, storageInfo=lv=-57;cid=testClusterID;nsid=257403860;c=1689682473336), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-18 12:14:35,927 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8ed0d74e0aeb82f6: Processing first storage report for DS-afeb6af7-cdda-40b9-825c-4c04d054aaec from datanode b9d02080-3f04-4581-8baf-f681a8a8cfcf
2023-07-18 12:14:35,927 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8ed0d74e0aeb82f6: from storage DS-afeb6af7-cdda-40b9-825c-4c04d054aaec node DatanodeRegistration(127.0.0.1:43097, datanodeUuid=b9d02080-3f04-4581-8baf-f681a8a8cfcf, infoPort=43027, infoSecurePort=0, ipcPort=37687, storageInfo=lv=-57;cid=testClusterID;nsid=257403860;c=1689682473336), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-18 12:14:36,142 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d
2023-07-18 12:14:36,215 INFO  [Listener at localhost/37687] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/zookeeper_0, clientPort=50805, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-18 12:14:36,233 INFO  [Listener at localhost/37687] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50805
2023-07-18 12:14:36,245 INFO  [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-18 12:14:36,247 INFO  [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-18 12:14:36,965 INFO  [Listener at localhost/37687] util.FSUtils(471): Created version file at hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae with version=8
2023-07-18 12:14:36,966 INFO  [Listener at localhost/37687] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/hbase-staging
2023-07-18 12:14:36,976 DEBUG [Listener at localhost/37687] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-18 12:14:36,977 DEBUG [Listener at localhost/37687] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-18 12:14:36,977 DEBUG [Listener at localhost/37687] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-18 12:14:36,977 DEBUG [Listener at localhost/37687] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-18 12:14:37,442 INFO  [Listener at localhost/37687] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-18 12:14:38,036 INFO  [Listener at localhost/37687] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-18 12:14:38,086 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:38,087 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:38,088 INFO  [Listener at localhost/37687] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-18 12:14:38,088 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:38,088 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-18 12:14:38,246 INFO  [Listener at localhost/37687] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-18 12:14:38,326 DEBUG [Listener at localhost/37687] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-18 12:14:38,446 INFO  [Listener at localhost/37687] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36151
2023-07-18 12:14:38,456 INFO  [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-18 12:14:38,458 INFO  [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-18 12:14:38,480 INFO  [Listener at localhost/37687] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36151 connecting to ZooKeeper ensemble=127.0.0.1:50805
2023-07-18 12:14:38,529 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:361510x0, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-18 12:14:38,533 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36151-0x101785affaa0000 connected
2023-07-18 12:14:38,660 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-18 12:14:38,662 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-18 12:14:38,667 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-18 12:14:38,718 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36151
2023-07-18 12:14:38,738 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36151
2023-07-18 12:14:38,766 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36151
2023-07-18 12:14:38,777 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36151
2023-07-18 12:14:38,777 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36151
2023-07-18 12:14:38,810 INFO  [Listener at localhost/37687] log.Log(170): Logging initialized @7540ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-18 12:14:38,991 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-18 12:14:38,992 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-18 12:14:38,993 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-18 12:14:39,002 INFO  [Listener at localhost/37687] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-18 12:14:39,002 INFO  [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-18 12:14:39,003 INFO  [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-18 12:14:39,007 INFO  [Listener at localhost/37687] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-18 12:14:39,154 INFO  [Listener at localhost/37687] http.HttpServer(1146): Jetty bound to port 34307
2023-07-18 12:14:39,155 INFO  [Listener at localhost/37687] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-18 12:14:39,185 INFO  [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-18 12:14:39,188 INFO  [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38bddd36{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,AVAILABLE}
2023-07-18 12:14:39,189 INFO  [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-18 12:14:39,189 INFO  [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@680fffdc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-18 12:14:39,373 INFO  [Listener at localhost/37687] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-18 12:14:39,386 INFO  [Listener at localhost/37687] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-18 12:14:39,386 INFO  [Listener at localhost/37687] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-18 12:14:39,388 INFO  [Listener at localhost/37687] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-18 12:14:39,395 INFO  [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-18 12:14:39,421 INFO  [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@38aa31da{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/jetty-0_0_0_0-34307-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8079344647421924234/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-18 12:14:39,434 INFO  [Listener at localhost/37687] server.AbstractConnector(333): Started ServerConnector@7e43481b{HTTP/1.1, (http/1.1)}{0.0.0.0:34307}
2023-07-18 12:14:39,434 INFO  [Listener at localhost/37687] server.Server(415): Started @8164ms
2023-07-18 12:14:39,437 INFO  [Listener at localhost/37687] master.HMaster(444): hbase.rootdir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae, hbase.cluster.distributed=false
2023-07-18 12:14:39,510 INFO  [Listener at localhost/37687] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-18 12:14:39,510 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:39,510 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:39,511 INFO  [Listener at localhost/37687] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-18 12:14:39,511 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:39,511 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-18 12:14:39,516 INFO  [Listener at localhost/37687] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-18 12:14:39,519 INFO  [Listener at localhost/37687] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35237
2023-07-18 12:14:39,522 INFO  [Listener at localhost/37687] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-18 12:14:39,530 DEBUG [Listener at localhost/37687] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-18 12:14:39,531 INFO  [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-18 12:14:39,534 INFO  [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-18 12:14:39,537 INFO  [Listener at localhost/37687] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35237 connecting to ZooKeeper ensemble=127.0.0.1:50805
2023-07-18 12:14:39,545 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:352370x0, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-18 12:14:39,546 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35237-0x101785affaa0001 connected
2023-07-18 12:14:39,547 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-18 12:14:39,548 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-18 12:14:39,549 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-18 12:14:39,550 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35237
2023-07-18 12:14:39,550 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35237
2023-07-18 12:14:39,551 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35237
2023-07-18 12:14:39,551 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35237
2023-07-18 12:14:39,552 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35237
2023-07-18 12:14:39,554 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-18 12:14:39,554 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-18 12:14:39,554 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-18 12:14:39,555 INFO  [Listener at localhost/37687] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-18 12:14:39,556 INFO  [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-18 12:14:39,556 INFO  [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-18 12:14:39,556 INFO  [Listener at localhost/37687] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-18 12:14:39,559 INFO  [Listener at localhost/37687] http.HttpServer(1146): Jetty bound to port 40089
2023-07-18 12:14:39,559 INFO  [Listener at localhost/37687] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-18 12:14:39,575 INFO  [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-18 12:14:39,575 INFO  [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@249e2011{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,AVAILABLE}
2023-07-18 12:14:39,576 INFO  [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-18 12:14:39,576 INFO  [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a6a072{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-18 12:14:39,696 INFO  [Listener at localhost/37687] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-18 12:14:39,698 INFO  [Listener at localhost/37687] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-18 12:14:39,698 INFO  [Listener at localhost/37687] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-18 12:14:39,698 INFO  [Listener at localhost/37687] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-18 12:14:39,700 INFO  [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-18 12:14:39,705 INFO  [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2a1b55bd{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/jetty-0_0_0_0-40089-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3730840035367216652/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-18 12:14:39,706 INFO  [Listener at localhost/37687] server.AbstractConnector(333): Started ServerConnector@4cab7999{HTTP/1.1, (http/1.1)}{0.0.0.0:40089}
2023-07-18 12:14:39,706 INFO  [Listener at localhost/37687] server.Server(415): Started @8436ms
2023-07-18 12:14:39,722 INFO  [Listener at localhost/37687] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-18 12:14:39,722 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:39,722 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:39,723 INFO  [Listener at localhost/37687] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-18 12:14:39,723 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-18 12:14:39,723 INFO  [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-18 12:14:39,723 INFO  [Listener at localhost/37687] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-18 12:14:39,729 INFO  [Listener at localhost/37687] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41985
2023-07-18 12:14:39,730 INFO  [Listener at localhost/37687] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-18 12:14:39,734 DEBUG [Listener at localhost/37687] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-18 12:14:39,735 INFO  [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-18 12:14:39,737 INFO  [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-18 12:14:39,738 INFO  [Listener at localhost/37687] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41985 connecting to ZooKeeper ensemble=127.0.0.1:50805
2023-07-18 12:14:39,742 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:419850x0, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-18 12:14:39,744 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41985-0x101785affaa0002 connected
2023-07-18 12:14:39,744 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-18 12:14:39,745 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-18 12:14:39,746 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-18 12:14:39,747 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41985
2023-07-18 12:14:39,747 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41985
2023-07-18 12:14:39,747 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41985
2023-07-18 12:14:39,748 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41985
2023-07-18 12:14:39,749 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41985
2023-07-18 12:14:39,751 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-18 12:14:39,751 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-18 12:14:39,751 INFO  [Listener at localhost/37687] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-18 12:14:39,752 INFO  [Listener at localhost/37687] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-18 12:14:39,752 INFO  [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-18 12:14:39,752 INFO  [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-18 12:14:39,752 INFO  [Listener at localhost/37687] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-18 12:14:39,753 INFO [Listener at localhost/37687] http.HttpServer(1146): Jetty bound to port 34415 2023-07-18 12:14:39,753 INFO [Listener at localhost/37687] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:14:39,760 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:39,761 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2afce463{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:14:39,761 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:39,762 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ec386b4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:14:39,928 INFO [Listener at localhost/37687] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:14:39,929 INFO [Listener at localhost/37687] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:14:39,929 INFO [Listener at localhost/37687] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:14:39,929 INFO [Listener at localhost/37687] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 12:14:39,931 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:39,931 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4d520d27{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/jetty-0_0_0_0-34415-hbase-server-2_4_18-SNAPSHOT_jar-_-any-677803030807334919/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:14:39,933 INFO [Listener at localhost/37687] server.AbstractConnector(333): Started ServerConnector@72b0dcfa{HTTP/1.1, (http/1.1)}{0.0.0.0:34415} 2023-07-18 12:14:39,933 INFO [Listener at localhost/37687] server.Server(415): Started @8663ms 2023-07-18 12:14:39,948 INFO [Listener at localhost/37687] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:14:39,948 INFO [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:14:39,948 INFO [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:14:39,948 INFO [Listener at localhost/37687] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:14:39,949 INFO 
[Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:14:39,949 INFO [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:14:39,949 INFO [Listener at localhost/37687] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:14:39,951 INFO [Listener at localhost/37687] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44601 2023-07-18 12:14:39,951 INFO [Listener at localhost/37687] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:14:39,953 DEBUG [Listener at localhost/37687] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:14:39,954 INFO [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:14:39,955 INFO [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:14:39,957 INFO [Listener at localhost/37687] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44601 connecting to ZooKeeper ensemble=127.0.0.1:50805 2023-07-18 12:14:39,961 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:446010x0, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:14:39,963 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44601-0x101785affaa0003 connected 2023-07-18 12:14:39,963 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:14:39,964 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:14:39,965 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:14:39,965 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44601 2023-07-18 12:14:39,966 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44601 2023-07-18 12:14:39,966 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44601 2023-07-18 12:14:39,967 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44601 2023-07-18 12:14:39,967 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44601 2023-07-18 12:14:39,971 INFO [Listener at localhost/37687] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:14:39,971 INFO [Listener at localhost/37687] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:14:39,971 INFO [Listener at localhost/37687] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:14:39,972 INFO [Listener at localhost/37687] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:14:39,972 INFO [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:14:39,973 INFO [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:14:39,973 INFO [Listener at localhost/37687] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 12:14:39,974 INFO [Listener at localhost/37687] http.HttpServer(1146): Jetty bound to port 44963 2023-07-18 12:14:39,974 INFO [Listener at localhost/37687] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:14:39,976 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:39,976 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@477c886b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:14:39,976 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:39,977 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5526bfb1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:14:40,096 INFO [Listener at localhost/37687] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:14:40,097 INFO [Listener at localhost/37687] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:14:40,097 INFO [Listener at localhost/37687] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:14:40,097 INFO [Listener at localhost/37687] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 12:14:40,099 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:40,099 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@36c7be16{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/jetty-0_0_0_0-44963-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8854895045045088151/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:14:40,101 INFO [Listener at localhost/37687] server.AbstractConnector(333): Started ServerConnector@58b8c90a{HTTP/1.1, (http/1.1)}{0.0.0.0:44963} 2023-07-18 12:14:40,101 INFO [Listener at localhost/37687] server.Server(415): Started @8831ms 2023-07-18 12:14:40,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:14:40,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@44cc3892{HTTP/1.1, (http/1.1)}{0.0.0.0:34291} 2023-07-18 12:14:40,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8840ms 2023-07-18 12:14:40,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:14:40,120 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 12:14:40,121 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:14:40,139 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:14:40,139 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:14:40,139 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:14:40,139 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:14:40,140 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:40,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 12:14:40,142 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36151,1689682477215 from backup master directory 2023-07-18 12:14:40,143 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 12:14:40,148 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:14:40,148 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 12:14:40,148 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:14:40,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:14:40,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-18 12:14:40,154 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-18 12:14:40,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/hbase.id with ID: e5be9d35-8260-456d-9f60-42f56ac29974 2023-07-18 12:14:40,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:14:40,323 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:40,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3c9cf855 to 127.0.0.1:50805 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:14:40,433 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66fd0568, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:14:40,458 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:40,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 12:14:40,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-18 12:14:40,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-18 12:14:40,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 12:14:40,489 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 12:14:40,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:14:40,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store-tmp 2023-07-18 12:14:40,608 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:40,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 12:14:40,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:14:40,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:14:40,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 12:14:40,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:14:40,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 12:14:40,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:14:40,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/WALs/jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:14:40,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36151%2C1689682477215, suffix=, logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/WALs/jenkins-hbase4.apache.org,36151,1689682477215, archiveDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/oldWALs, maxLogs=10 2023-07-18 12:14:40,739 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK] 2023-07-18 12:14:40,739 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK] 2023-07-18 12:14:40,739 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK] 2023-07-18 12:14:40,763 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-18 12:14:40,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/WALs/jenkins-hbase4.apache.org,36151,1689682477215/jenkins-hbase4.apache.org%2C36151%2C1689682477215.1689682480656 2023-07-18 12:14:40,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK], DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK], DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK]] 2023-07-18 12:14:40,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:40,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:40,860 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:14:40,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:14:40,935 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:14:40,941 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 12:14:40,972 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 12:14:40,987 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-18 12:14:40,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:14:40,996 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:14:41,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:14:41,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:41,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11171546080, jitterRate=0.04043130576610565}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:41,032 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:14:41,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 12:14:41,060 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 12:14:41,060 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 12:14:41,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 12:14:41,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-18 12:14:41,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 53 msec 2023-07-18 12:14:41,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 12:14:41,144 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 12:14:41,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-18 12:14:41,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 12:14:41,163 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 12:14:41,168 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 12:14:41,170 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:41,171 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 12:14:41,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 12:14:41,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 12:14:41,190 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:14:41,190 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:14:41,190 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:41,190 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:14:41,190 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:14:41,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36151,1689682477215, sessionid=0x101785affaa0000, setting cluster-up flag (Was=false) 2023-07-18 12:14:41,210 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:41,214 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 12:14:41,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:14:41,221 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:41,227 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 12:14:41,228 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:14:41,230 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.hbase-snapshot/.tmp 2023-07-18 12:14:41,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 12:14:41,305 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(951): ClusterId : e5be9d35-8260-456d-9f60-42f56ac29974 2023-07-18 12:14:41,305 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(951): ClusterId : e5be9d35-8260-456d-9f60-42f56ac29974 2023-07-18 12:14:41,305 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(951): ClusterId : e5be9d35-8260-456d-9f60-42f56ac29974 2023-07-18 12:14:41,310 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 12:14:41,311 DEBUG [RS:1;jenkins-hbase4:41985] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:14:41,312 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:14:41,311 DEBUG [RS:0;jenkins-hbase4:35237] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:14:41,311 DEBUG [RS:2;jenkins-hbase4:44601] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:14:41,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 12:14:41,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-18 12:14:41,318 DEBUG [RS:0;jenkins-hbase4:35237] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:14:41,318 DEBUG [RS:1;jenkins-hbase4:41985] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:14:41,318 DEBUG [RS:2;jenkins-hbase4:44601] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:14:41,318 DEBUG [RS:1;jenkins-hbase4:41985] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:14:41,318 DEBUG [RS:0;jenkins-hbase4:35237] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:14:41,318 DEBUG [RS:2;jenkins-hbase4:44601] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:14:41,323 DEBUG [RS:1;jenkins-hbase4:41985] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:14:41,323 DEBUG [RS:0;jenkins-hbase4:35237] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:14:41,323 DEBUG [RS:2;jenkins-hbase4:44601] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:14:41,325 DEBUG [RS:0;jenkins-hbase4:35237] zookeeper.ReadOnlyZKClient(139): Connect 0x15bb4820 to 127.0.0.1:50805 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:14:41,326 DEBUG [RS:2;jenkins-hbase4:44601] zookeeper.ReadOnlyZKClient(139): Connect 0x5739a2bd to 127.0.0.1:50805 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:14:41,325 DEBUG [RS:1;jenkins-hbase4:41985] zookeeper.ReadOnlyZKClient(139): Connect 0x0fb36c19 to 127.0.0.1:50805 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:14:41,335 DEBUG [RS:2;jenkins-hbase4:44601] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@123b363e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:14:41,335 DEBUG [RS:1;jenkins-hbase4:41985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d0a198e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:14:41,336 DEBUG [RS:0;jenkins-hbase4:35237] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64460ddb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:14:41,336 DEBUG [RS:2;jenkins-hbase4:44601] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41e70a15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:14:41,336 DEBUG [RS:1;jenkins-hbase4:41985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4dc86b69, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:14:41,336 DEBUG [RS:0;jenkins-hbase4:35237] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a0bd615, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:14:41,359 DEBUG [RS:1;jenkins-hbase4:41985] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41985 2023-07-18 12:14:41,360 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44601 2023-07-18 12:14:41,364 DEBUG [RS:0;jenkins-hbase4:35237] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35237 2023-07-18 12:14:41,365 INFO [RS:1;jenkins-hbase4:41985] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:14:41,365 INFO [RS:2;jenkins-hbase4:44601] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:14:41,367 INFO [RS:2;jenkins-hbase4:44601] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:14:41,365 INFO [RS:0;jenkins-hbase4:35237] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:14:41,368 INFO [RS:0;jenkins-hbase4:35237] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:14:41,368 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 12:14:41,367 INFO [RS:1;jenkins-hbase4:41985] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:14:41,368 DEBUG [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 12:14:41,368 DEBUG [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 12:14:41,371 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36151,1689682477215 with isa=jenkins-hbase4.apache.org/172.31.14.131:35237, startcode=1689682479509 2023-07-18 12:14:41,371 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36151,1689682477215 with isa=jenkins-hbase4.apache.org/172.31.14.131:41985, startcode=1689682479721 2023-07-18 12:14:41,371 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36151,1689682477215 with isa=jenkins-hbase4.apache.org/172.31.14.131:44601, startcode=1689682479947 2023-07-18 12:14:41,392 DEBUG [RS:1;jenkins-hbase4:41985] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:14:41,392 DEBUG [RS:0;jenkins-hbase4:35237] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:14:41,392 DEBUG [RS:2;jenkins-hbase4:44601] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:14:41,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 12:14:41,471 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35665, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:14:41,471 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54573, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:14:41,471 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53791, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:14:41,474 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 12:14:41,481 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:41,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, 
ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 12:14:41,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 12:14:41,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 12:14:41,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:14:41,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:14:41,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:14:41,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:14:41,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 12:14:41,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:14:41,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689682511486 2023-07-18 12:14:41,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 12:14:41,495 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:41,497 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:41,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 12:14:41,501 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 12:14:41,503 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 12:14:41,505 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:41,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 12:14:41,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 12:14:41,516 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 12:14:41,516 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 
1 old WALs cleaner threads 2023-07-18 12:14:41,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 12:14:41,519 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 12:14:41,519 DEBUG [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 12:14:41,520 WARN [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 12:14:41,519 DEBUG [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 12:14:41,520 WARN [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 12:14:41,520 WARN [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 12:14:41,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 12:14:41,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 12:14:41,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 12:14:41,525 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 12:14:41,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682481527,5,FailOnTimeoutGroup] 2023-07-18 12:14:41,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682481528,5,FailOnTimeoutGroup] 2023-07-18 12:14:41,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 12:14:41,530 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-18 12:14:41,576 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:41,577 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:41,578 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae 2023-07-18 12:14:41,605 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:41,607 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 12:14:41,610 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info 2023-07-18 12:14:41,611 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 12:14:41,612 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:41,612 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 12:14:41,615 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:14:41,616 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 12:14:41,616 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:41,617 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 12:14:41,619 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table 2023-07-18 12:14:41,620 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 12:14:41,621 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:41,621 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36151,1689682477215 with isa=jenkins-hbase4.apache.org/172.31.14.131:44601, startcode=1689682479947 2023-07-18 12:14:41,621 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36151,1689682477215 with isa=jenkins-hbase4.apache.org/172.31.14.131:35237, startcode=1689682479509 2023-07-18 
12:14:41,622 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36151,1689682477215 with isa=jenkins-hbase4.apache.org/172.31.14.131:41985, startcode=1689682479721 2023-07-18 12:14:41,623 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740 2023-07-18 12:14:41,623 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740 2023-07-18 12:14:41,629 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36151] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:41,629 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 12:14:41,630 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:14:41,631 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 12:14:41,632 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 12:14:41,638 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36151] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:41,639 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 12:14:41,640 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 12:14:41,640 DEBUG [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae 2023-07-18 12:14:41,640 DEBUG [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46497 2023-07-18 12:14:41,641 DEBUG [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34307 2023-07-18 12:14:41,641 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:41,641 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36151] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:41,641 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:14:41,641 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 12:14:41,644 DEBUG [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae 2023-07-18 12:14:41,644 DEBUG [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46497 2023-07-18 12:14:41,644 DEBUG [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34307 2023-07-18 12:14:41,644 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9975380160, jitterRate=-0.0709703266620636}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 12:14:41,645 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 12:14:41,645 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 12:14:41,645 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 12:14:41,645 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 12:14:41,645 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 12:14:41,645 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 12:14:41,646 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 12:14:41,646 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 12:14:41,654 DEBUG [Listener at localhost/37687-EventThread] 
zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:14:41,655 DEBUG [RS:0;jenkins-hbase4:35237] zookeeper.ZKUtil(162): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:41,655 DEBUG [RS:1;jenkins-hbase4:41985] zookeeper.ZKUtil(162): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:41,655 WARN [RS:0;jenkins-hbase4:35237] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:14:41,655 INFO [RS:0;jenkins-hbase4:35237] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:14:41,656 DEBUG [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:41,655 WARN [RS:1;jenkins-hbase4:41985] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:14:41,657 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae 2023-07-18 12:14:41,657 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46497 2023-07-18 12:14:41,657 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34307 2023-07-18 12:14:41,657 INFO [RS:1;jenkins-hbase4:41985] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:14:41,659 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 12:14:41,663 DEBUG [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:41,663 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 12:14:41,666 DEBUG [RS:2;jenkins-hbase4:44601] zookeeper.ZKUtil(162): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:41,666 WARN [RS:2;jenkins-hbase4:44601] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 12:14:41,666 INFO [RS:2;jenkins-hbase4:44601] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:14:41,666 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:41,667 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35237,1689682479509] 2023-07-18 12:14:41,667 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41985,1689682479721] 2023-07-18 12:14:41,667 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44601,1689682479947] 2023-07-18 12:14:41,679 DEBUG [RS:2;jenkins-hbase4:44601] zookeeper.ZKUtil(162): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:41,679 DEBUG [RS:1;jenkins-hbase4:41985] zookeeper.ZKUtil(162): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:41,679 DEBUG [RS:0;jenkins-hbase4:35237] zookeeper.ZKUtil(162): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:41,680 DEBUG [RS:2;jenkins-hbase4:44601] zookeeper.ZKUtil(162): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:41,680 DEBUG [RS:0;jenkins-hbase4:35237] zookeeper.ZKUtil(162): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:41,680 DEBUG [RS:1;jenkins-hbase4:41985] zookeeper.ZKUtil(162): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:41,680 DEBUG [RS:2;jenkins-hbase4:44601] zookeeper.ZKUtil(162): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:41,680 DEBUG [RS:1;jenkins-hbase4:41985] zookeeper.ZKUtil(162): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:41,680 DEBUG [RS:0;jenkins-hbase4:35237] zookeeper.ZKUtil(162): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:41,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 12:14:41,692 DEBUG [RS:1;jenkins-hbase4:41985] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 
12:14:41,692 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:14:41,692 DEBUG [RS:0;jenkins-hbase4:35237] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:14:41,694 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 12:14:41,697 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 12:14:41,705 INFO [RS:2;jenkins-hbase4:44601] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:14:41,705 INFO [RS:1;jenkins-hbase4:41985] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:14:41,705 INFO [RS:0;jenkins-hbase4:35237] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:14:41,730 INFO [RS:0;jenkins-hbase4:35237] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:14:41,730 INFO [RS:1;jenkins-hbase4:41985] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:14:41,730 INFO [RS:2;jenkins-hbase4:44601] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:14:41,736 INFO [RS:1;jenkins-hbase4:41985] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:14:41,736 INFO [RS:2;jenkins-hbase4:44601] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:14:41,736 INFO [RS:0;jenkins-hbase4:35237] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:14:41,737 INFO [RS:2;jenkins-hbase4:44601] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,737 INFO [RS:1;jenkins-hbase4:41985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,737 INFO [RS:0;jenkins-hbase4:35237] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 12:14:41,739 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:14:41,739 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:14:41,739 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:14:41,748 INFO [RS:2;jenkins-hbase4:44601] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,748 INFO [RS:0;jenkins-hbase4:35237] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,748 INFO [RS:1;jenkins-hbase4:41985] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,748 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:14:41,750 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,749 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG 
[RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:14:41,750 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:1;jenkins-hbase4:41985] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:2;jenkins-hbase4:44601] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,750 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,751 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,752 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:14:41,752 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,752 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,752 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,752 INFO [RS:1;jenkins-hbase4:41985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-18 12:14:41,752 DEBUG [RS:0;jenkins-hbase4:35237] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:41,752 INFO [RS:1;jenkins-hbase4:41985] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,752 INFO [RS:1;jenkins-hbase4:41985] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,755 INFO [RS:2;jenkins-hbase4:44601] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,755 INFO [RS:2;jenkins-hbase4:44601] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,755 INFO [RS:2;jenkins-hbase4:44601] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,757 INFO [RS:0;jenkins-hbase4:35237] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,757 INFO [RS:0;jenkins-hbase4:35237] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,757 INFO [RS:0;jenkins-hbase4:35237] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,776 INFO [RS:2;jenkins-hbase4:44601] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:14:41,776 INFO [RS:0;jenkins-hbase4:35237] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:14:41,776 INFO [RS:1;jenkins-hbase4:41985] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:14:41,779 INFO [RS:0;jenkins-hbase4:35237] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35237,1689682479509-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,779 INFO [RS:1;jenkins-hbase4:41985] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41985,1689682479721-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:41,779 INFO [RS:2;jenkins-hbase4:44601] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44601,1689682479947-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 12:14:41,804 INFO [RS:0;jenkins-hbase4:35237] regionserver.Replication(203): jenkins-hbase4.apache.org,35237,1689682479509 started 2023-07-18 12:14:41,804 INFO [RS:1;jenkins-hbase4:41985] regionserver.Replication(203): jenkins-hbase4.apache.org,41985,1689682479721 started 2023-07-18 12:14:41,804 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35237,1689682479509, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35237, sessionid=0x101785affaa0001 2023-07-18 12:14:41,804 INFO [RS:2;jenkins-hbase4:44601] regionserver.Replication(203): jenkins-hbase4.apache.org,44601,1689682479947 started 2023-07-18 12:14:41,804 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41985,1689682479721, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41985, sessionid=0x101785affaa0002 2023-07-18 12:14:41,804 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44601,1689682479947, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44601, sessionid=0x101785affaa0003 2023-07-18 12:14:41,804 DEBUG [RS:0;jenkins-hbase4:35237] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:14:41,805 DEBUG [RS:2;jenkins-hbase4:44601] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:14:41,805 DEBUG [RS:0;jenkins-hbase4:35237] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:41,804 DEBUG [RS:1;jenkins-hbase4:41985] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:14:41,805 DEBUG [RS:0;jenkins-hbase4:35237] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35237,1689682479509' 2023-07-18 12:14:41,805 DEBUG [RS:2;jenkins-hbase4:44601] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:41,805 DEBUG [RS:0;jenkins-hbase4:35237] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:14:41,805 DEBUG [RS:1;jenkins-hbase4:41985] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:41,805 DEBUG [RS:2;jenkins-hbase4:44601] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44601,1689682479947' 2023-07-18 12:14:41,807 DEBUG [RS:2;jenkins-hbase4:44601] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:14:41,807 DEBUG [RS:1;jenkins-hbase4:41985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41985,1689682479721' 2023-07-18 12:14:41,807 DEBUG [RS:1;jenkins-hbase4:41985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:14:41,807 DEBUG [RS:0;jenkins-hbase4:35237] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:14:41,807 DEBUG [RS:2;jenkins-hbase4:44601] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:14:41,808 DEBUG 
[RS:1;jenkins-hbase4:41985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:14:41,808 DEBUG [RS:2;jenkins-hbase4:44601] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:14:41,808 DEBUG [RS:1;jenkins-hbase4:41985] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:14:41,808 DEBUG [RS:2;jenkins-hbase4:44601] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:14:41,808 DEBUG [RS:0;jenkins-hbase4:35237] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:14:41,808 DEBUG [RS:2;jenkins-hbase4:44601] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:41,808 DEBUG [RS:1;jenkins-hbase4:41985] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:14:41,808 DEBUG [RS:2;jenkins-hbase4:44601] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44601,1689682479947' 2023-07-18 12:14:41,808 DEBUG [RS:2;jenkins-hbase4:44601] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:14:41,808 DEBUG [RS:0;jenkins-hbase4:35237] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:14:41,808 DEBUG [RS:1;jenkins-hbase4:41985] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:41,809 DEBUG [RS:1;jenkins-hbase4:41985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41985,1689682479721' 2023-07-18 12:14:41,809 DEBUG [RS:1;jenkins-hbase4:41985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:14:41,809 DEBUG [RS:0;jenkins-hbase4:35237] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:41,809 DEBUG [RS:0;jenkins-hbase4:35237] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35237,1689682479509' 2023-07-18 12:14:41,809 DEBUG [RS:0;jenkins-hbase4:35237] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:14:41,809 DEBUG [RS:2;jenkins-hbase4:44601] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:14:41,809 DEBUG [RS:1;jenkins-hbase4:41985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:14:41,809 DEBUG [RS:0;jenkins-hbase4:35237] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:14:41,809 DEBUG [RS:2;jenkins-hbase4:44601] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:14:41,810 INFO [RS:2;jenkins-hbase4:44601] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 12:14:41,810 INFO [RS:2;jenkins-hbase4:44601] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 12:14:41,810 DEBUG [RS:1;jenkins-hbase4:41985] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:14:41,810 DEBUG [RS:0;jenkins-hbase4:35237] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:14:41,810 INFO [RS:1;jenkins-hbase4:41985] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 12:14:41,810 INFO [RS:0;jenkins-hbase4:35237] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 12:14:41,810 INFO [RS:0;jenkins-hbase4:35237] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 12:14:41,810 INFO [RS:1;jenkins-hbase4:41985] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 12:14:41,849 DEBUG [jenkins-hbase4:36151] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 12:14:41,863 DEBUG [jenkins-hbase4:36151] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:41,864 DEBUG [jenkins-hbase4:36151] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:41,864 DEBUG [jenkins-hbase4:36151] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:41,865 DEBUG [jenkins-hbase4:36151] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:41,865 DEBUG [jenkins-hbase4:36151] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:41,868 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35237,1689682479509, state=OPENING 2023-07-18 12:14:41,879 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 12:14:41,881 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:41,882 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:14:41,886 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:41,921 INFO [RS:2;jenkins-hbase4:44601] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44601%2C1689682479947, suffix=, logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44601,1689682479947, archiveDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs, maxLogs=32 2023-07-18 12:14:41,921 INFO [RS:1;jenkins-hbase4:41985] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41985%2C1689682479721, suffix=, logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,41985,1689682479721, archiveDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs, maxLogs=32 2023-07-18 12:14:41,921 INFO 
[RS:0;jenkins-hbase4:35237] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35237%2C1689682479509, suffix=, logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,35237,1689682479509, archiveDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs, maxLogs=32 2023-07-18 12:14:41,949 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK] 2023-07-18 12:14:41,949 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK] 2023-07-18 12:14:41,949 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK] 2023-07-18 12:14:41,949 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK] 2023-07-18 12:14:41,949 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK] 2023-07-18 12:14:41,950 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK] 2023-07-18 12:14:41,963 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK] 2023-07-18 12:14:41,963 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK] 2023-07-18 12:14:41,965 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK] 2023-07-18 12:14:41,970 INFO [RS:0;jenkins-hbase4:35237] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,35237,1689682479509/jenkins-hbase4.apache.org%2C35237%2C1689682479509.1689682481926 2023-07-18 12:14:41,970 INFO [RS:1;jenkins-hbase4:41985] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,41985,1689682479721/jenkins-hbase4.apache.org%2C41985%2C1689682479721.1689682481926 2023-07-18 12:14:41,970 DEBUG [RS:0;jenkins-hbase4:35237] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK], DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK]] 2023-07-18 12:14:41,970 DEBUG [RS:1;jenkins-hbase4:41985] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK], DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK], DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK]] 2023-07-18 12:14:41,972 INFO [RS:2;jenkins-hbase4:44601] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44601,1689682479947/jenkins-hbase4.apache.org%2C44601%2C1689682479947.1689682481926 2023-07-18 12:14:41,972 DEBUG [RS:2;jenkins-hbase4:44601] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK], DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK]] 2023-07-18 12:14:42,070 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:42,073 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:14:42,077 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55510, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:14:42,081 WARN [ReadOnlyZKClient-127.0.0.1:50805@0x3c9cf855] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 12:14:42,096 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 12:14:42,096 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:14:42,104 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35237%2C1689682479509.meta, suffix=.meta, logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,35237,1689682479509, archiveDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs, maxLogs=32 2023-07-18 12:14:42,120 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36151,1689682477215] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:14:42,130 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK] 2023-07-18 12:14:42,130 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK] 2023-07-18 12:14:42,130 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK] 2023-07-18 12:14:42,131 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55520, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:14:42,132 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35237] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:55520 deadline: 1689682542131, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:42,139 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,35237,1689682479509/jenkins-hbase4.apache.org%2C35237%2C1689682479509.meta.1689682482106.meta 2023-07-18 12:14:42,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK], DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK]] 2023-07-18 12:14:42,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:42,144 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 12:14:42,147 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 12:14:42,149 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 12:14:42,155 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 12:14:42,155 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:42,155 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 12:14:42,155 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 12:14:42,158 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 12:14:42,160 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info 2023-07-18 12:14:42,160 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info 2023-07-18 12:14:42,161 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 12:14:42,162 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:42,162 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 12:14:42,164 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:14:42,164 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:14:42,165 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 12:14:42,166 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:42,166 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 12:14:42,167 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table 2023-07-18 12:14:42,167 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table 2023-07-18 12:14:42,168 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 12:14:42,169 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:42,170 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740 2023-07-18 12:14:42,173 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740 2023-07-18 12:14:42,177 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 12:14:42,180 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 12:14:42,184 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11934733440, jitterRate=0.11150866746902466}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 12:14:42,184 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 12:14:42,194 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689682482062 2023-07-18 12:14:42,219 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 12:14:42,220 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 12:14:42,221 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35237,1689682479509, state=OPEN 2023-07-18 12:14:42,224 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 12:14:42,224 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:14:42,229 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 12:14:42,229 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35237,1689682479509 in 338 msec 2023-07-18 12:14:42,236 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 12:14:42,236 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 550 msec 2023-07-18 12:14:42,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 918 msec 2023-07-18 12:14:42,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689682482242, completionTime=-1 2023-07-18 12:14:42,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 12:14:42,242 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-18 12:14:42,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 12:14:42,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689682542300 2023-07-18 12:14:42,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689682602300 2023-07-18 12:14:42,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 57 msec 2023-07-18 12:14:42,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36151,1689682477215-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:42,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36151,1689682477215-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:42,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36151,1689682477215-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:42,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36151, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:42,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:42,337 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 12:14:42,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 12:14:42,351 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:42,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 12:14:42,366 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:14:42,369 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:14:42,388 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,392 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e empty. 2023-07-18 12:14:42,393 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,393 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 12:14:42,440 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:42,442 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c0115abc37809fbbb5bf11832155875e, NAME => 'hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:42,461 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:42,461 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c0115abc37809fbbb5bf11832155875e, disabling compactions & flushes 2023-07-18 12:14:42,461 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 
2023-07-18 12:14:42,461 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:14:42,462 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. after waiting 0 ms 2023-07-18 12:14:42,462 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:14:42,462 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:14:42,462 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c0115abc37809fbbb5bf11832155875e: 2023-07-18 12:14:42,466 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:14:42,484 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682482469"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682482469"}]},"ts":"1689682482469"} 2023-07-18 12:14:42,516 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 12:14:42,519 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:14:42,526 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682482520"}]},"ts":"1689682482520"} 2023-07-18 12:14:42,531 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 12:14:42,535 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:42,536 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:42,536 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:42,536 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:42,536 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:42,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c0115abc37809fbbb5bf11832155875e, ASSIGN}] 2023-07-18 12:14:42,543 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c0115abc37809fbbb5bf11832155875e, ASSIGN 2023-07-18 12:14:42,545 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c0115abc37809fbbb5bf11832155875e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:14:42,659 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36151,1689682477215] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:42,662 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36151,1689682477215] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 12:14:42,666 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:14:42,669 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:14:42,675 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:42,676 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba empty. 2023-07-18 12:14:42,677 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:42,677 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 12:14:42,696 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
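[editor's note] The create 'hbase:rsgroup' entry above attaches the MultiRowMutationEndpoint coprocessor and pins the table to a single region via DisabledRegionSplitPolicy. A sketch of expressing the same attributes with the client-side builders; the table name is hypothetical, and in practice RSGroupInfoManagerImpl creates this table itself, so this is illustration only.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Fragment (caller would declare throws IOException): one 'm' family, a coprocessor,
// and a split policy that prevents the region from ever splitting.
TableDescriptor rsgroupLike = TableDescriptorBuilder
    .newBuilder(TableName.valueOf("demo_rsgroup_like"))     // hypothetical name
    .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
    .setRegionSplitPolicyClassName(
        "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
        .setMaxVersions(1)
        .build())
    .build();
```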
2023-07-18 12:14:42,700 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c0115abc37809fbbb5bf11832155875e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:42,702 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682482699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682482699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682482699"}]},"ts":"1689682482699"} 2023-07-18 12:14:42,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure c0115abc37809fbbb5bf11832155875e, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:42,714 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:42,716 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 521b60f74d0b1bace698944d2a6d3bba, NAME => 'hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:42,757 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:42,757 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 521b60f74d0b1bace698944d2a6d3bba, disabling compactions & flushes 2023-07-18 12:14:42,757 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:14:42,757 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:14:42,757 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. after waiting 0 ms 2023-07-18 12:14:42,758 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:14:42,758 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 
2023-07-18 12:14:42,758 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 521b60f74d0b1bace698944d2a6d3bba: 2023-07-18 12:14:42,762 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:14:42,764 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682482764"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682482764"}]},"ts":"1689682482764"} 2023-07-18 12:14:42,767 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 12:14:42,769 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:14:42,769 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682482769"}]},"ts":"1689682482769"} 2023-07-18 12:14:42,774 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 12:14:42,780 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:42,780 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:42,780 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:42,780 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:42,780 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:42,781 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=521b60f74d0b1bace698944d2a6d3bba, ASSIGN}] 2023-07-18 12:14:42,784 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=521b60f74d0b1bace698944d2a6d3bba, ASSIGN 2023-07-18 12:14:42,786 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=521b60f74d0b1bace698944d2a6d3bba, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:14:42,869 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:42,869 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:14:42,873 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43364, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:14:42,880 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:14:42,880 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c0115abc37809fbbb5bf11832155875e, NAME => 'hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:42,882 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,882 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:42,882 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,882 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,888 INFO [StoreOpener-c0115abc37809fbbb5bf11832155875e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,895 DEBUG [StoreOpener-c0115abc37809fbbb5bf11832155875e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e/info 2023-07-18 12:14:42,895 DEBUG [StoreOpener-c0115abc37809fbbb5bf11832155875e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e/info 2023-07-18 12:14:42,896 INFO [StoreOpener-c0115abc37809fbbb5bf11832155875e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c0115abc37809fbbb5bf11832155875e columnFamilyName info 2023-07-18 12:14:42,897 INFO [StoreOpener-c0115abc37809fbbb5bf11832155875e-1] regionserver.HStore(310): Store=c0115abc37809fbbb5bf11832155875e/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:42,898 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,900 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,909 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c0115abc37809fbbb5bf11832155875e 2023-07-18 12:14:42,914 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:42,915 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c0115abc37809fbbb5bf11832155875e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10414008480, jitterRate=-0.0301198810338974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:42,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c0115abc37809fbbb5bf11832155875e: 2023-07-18 12:14:42,917 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e., pid=7, masterSystemTime=1689682482869 2023-07-18 12:14:42,923 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:14:42,923 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:14:42,925 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c0115abc37809fbbb5bf11832155875e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:42,925 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682482924"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682482924"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682482924"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682482924"}]},"ts":"1689682482924"} 2023-07-18 12:14:42,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 12:14:42,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure c0115abc37809fbbb5bf11832155875e, server=jenkins-hbase4.apache.org,44601,1689682479947 in 220 msec 2023-07-18 12:14:42,937 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
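[editor's note] Once the ASSIGN / OpenRegionProcedure pair above completes (regionState=OPEN, openSeqNum=2), a test would normally block until the table's regions are actually online before using it. A sketch, assuming TEST_UTIL is the HBaseTestingUtility that started this minicluster (as in the TestRSGroups* base class):

```java
import org.apache.hadoop.hbase.TableName;

// Fragment: both calls are standard test-utility waits; they return once every region
// of the table is assigned and reachable, mirroring the OPEN transition logged above.
TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:namespace"));
TEST_UTIL.waitTableAvailable(TableName.valueOf("hbase:namespace"));
```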
2023-07-18 12:14:42,939 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=521b60f74d0b1bace698944d2a6d3bba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:42,939 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682482939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682482939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682482939"}]},"ts":"1689682482939"} 2023-07-18 12:14:42,943 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 12:14:42,943 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c0115abc37809fbbb5bf11832155875e, ASSIGN in 396 msec 2023-07-18 12:14:42,944 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:14:42,945 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682482944"}]},"ts":"1689682482944"} 2023-07-18 12:14:42,945 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 521b60f74d0b1bace698944d2a6d3bba, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:42,948 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 12:14:42,955 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:14:42,959 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 603 msec 2023-07-18 12:14:42,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 12:14:42,967 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:14:42,967 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:42,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:14:42,995 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43366, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:14:43,013 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, 
namespace=default 2023-07-18 12:14:43,036 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:14:43,046 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 42 msec 2023-07-18 12:14:43,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 12:14:43,063 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-18 12:14:43,063 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 12:14:43,108 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:14:43,108 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 521b60f74d0b1bace698944d2a6d3bba, NAME => 'hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:43,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 12:14:43,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. service=MultiRowMutationService 2023-07-18 12:14:43,110 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
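[editor's note] The CreateNamespaceProcedure entries above create the bootstrap 'default' and 'hbase' namespaces; a user namespace goes through the same procedure when created via the Admin API. A minimal sketch with a made-up namespace name:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Fragment: creating a user namespace runs the same CreateNamespaceProcedure on the master.
try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = conn.getAdmin()) {
  admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());  // hypothetical name
}
```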
2023-07-18 12:14:43,110 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:43,110 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:43,110 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:43,110 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:43,114 INFO [StoreOpener-521b60f74d0b1bace698944d2a6d3bba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:43,117 DEBUG [StoreOpener-521b60f74d0b1bace698944d2a6d3bba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba/m 2023-07-18 12:14:43,117 DEBUG [StoreOpener-521b60f74d0b1bace698944d2a6d3bba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba/m 2023-07-18 12:14:43,118 INFO [StoreOpener-521b60f74d0b1bace698944d2a6d3bba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 521b60f74d0b1bace698944d2a6d3bba columnFamilyName m 2023-07-18 12:14:43,119 INFO [StoreOpener-521b60f74d0b1bace698944d2a6d3bba-1] regionserver.HStore(310): Store=521b60f74d0b1bace698944d2a6d3bba/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:43,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:43,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:43,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:14:43,133 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:43,134 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 521b60f74d0b1bace698944d2a6d3bba; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7be54b7a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:43,134 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 521b60f74d0b1bace698944d2a6d3bba: 2023-07-18 12:14:43,136 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba., pid=9, masterSystemTime=1689682483102 2023-07-18 12:14:43,139 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:14:43,139 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:14:43,140 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=521b60f74d0b1bace698944d2a6d3bba, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:43,140 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682483140"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682483140"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682483140"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682483140"}]},"ts":"1689682483140"} 2023-07-18 12:14:43,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-18 12:14:43,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 521b60f74d0b1bace698944d2a6d3bba, server=jenkins-hbase4.apache.org,44601,1689682479947 in 199 msec 2023-07-18 12:14:43,155 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 12:14:43,158 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=521b60f74d0b1bace698944d2a6d3bba, ASSIGN in 367 msec 2023-07-18 12:14:43,182 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:14:43,195 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 130 msec 2023-07-18 12:14:43,197 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:14:43,198 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682483197"}]},"ts":"1689682483197"} 2023-07-18 12:14:43,200 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 12:14:43,206 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:14:43,212 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 12:14:43,212 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 547 msec 2023-07-18 12:14:43,215 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 12:14:43,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.067sec 2023-07-18 12:14:43,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 12:14:43,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 12:14:43,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 12:14:43,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36151,1689682477215-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 12:14:43,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36151,1689682477215-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-18 12:14:43,243 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 12:14:43,272 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 12:14:43,272 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
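[editor's note] After the refresh above, the group manager is online, and the 'list rsgroup' request a little further down is the kind of call a test makes through the rsgroup admin client. A sketch, assuming the hbase-rsgroup module's RSGroupAdminClient and an open Connection named conn (both assumptions, not taken from this log):

```java
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Fragment: lists the groups the manager just loaded; on a fresh cluster this is
// only the "default" group containing all region servers.
RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
  System.out.println(group.getName() + " servers=" + group.getServers());
}
```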
2023-07-18 12:14:43,316 DEBUG [Listener at localhost/37687] zookeeper.ReadOnlyZKClient(139): Connect 0x3e4d79c0 to 127.0.0.1:50805 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:14:43,342 DEBUG [Listener at localhost/37687] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@326b7986, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:14:43,377 DEBUG [hconnection-0x497c82a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:14:43,378 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:43,378 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:43,382 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 12:14:43,391 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 12:14:43,399 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55524, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:14:43,414 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:14:43,416 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:43,429 DEBUG [Listener at localhost/37687] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 12:14:43,433 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51504, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 12:14:43,451 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 12:14:43,451 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:14:43,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 12:14:43,459 DEBUG [Listener at localhost/37687] zookeeper.ReadOnlyZKClient(139): Connect 0x13aa6d9f to 127.0.0.1:50805 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:14:43,473 DEBUG [Listener at localhost/37687] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30296f6d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:14:43,473 INFO [Listener at localhost/37687] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50805 2023-07-18 12:14:43,484 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:14:43,488 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101785affaa000a connected 2023-07-18 12:14:43,517 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=418, OpenFileDescriptor=677, MaxFileDescriptor=60000, SystemLoadAverage=429, ProcessCount=176, AvailableMemoryMB=3781 2023-07-18 12:14:43,521 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-18 12:14:43,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:43,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:43,607 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 12:14:43,626 INFO [Listener at localhost/37687] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:14:43,626 INFO [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:14:43,627 INFO [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:14:43,627 INFO [Listener at localhost/37687] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:14:43,627 INFO [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:14:43,627 INFO [Listener at localhost/37687] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:14:43,627 INFO [Listener at localhost/37687] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:14:43,632 INFO [Listener at localhost/37687] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44567 2023-07-18 12:14:43,633 INFO [Listener at localhost/37687] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:14:43,634 DEBUG [Listener at localhost/37687] mob.MobFileCache(120): 
MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:14:43,636 INFO [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:14:43,641 INFO [Listener at localhost/37687] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:14:43,645 INFO [Listener at localhost/37687] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44567 connecting to ZooKeeper ensemble=127.0.0.1:50805 2023-07-18 12:14:43,650 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:445670x0, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:14:43,652 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44567-0x101785affaa000b connected 2023-07-18 12:14:43,652 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(162): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 12:14:43,653 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(162): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 12:14:43,654 DEBUG [Listener at localhost/37687] zookeeper.ZKUtil(164): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:14:43,655 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44567 2023-07-18 12:14:43,658 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44567 2023-07-18 12:14:43,659 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44567 2023-07-18 12:14:43,659 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44567 2023-07-18 12:14:43,660 DEBUG [Listener at localhost/37687] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44567 2023-07-18 12:14:43,662 INFO [Listener at localhost/37687] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:14:43,662 INFO [Listener at localhost/37687] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:14:43,662 INFO [Listener at localhost/37687] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:14:43,663 INFO [Listener at localhost/37687] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:14:43,663 INFO [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context 
logs 2023-07-18 12:14:43,663 INFO [Listener at localhost/37687] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:14:43,663 INFO [Listener at localhost/37687] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 12:14:43,664 INFO [Listener at localhost/37687] http.HttpServer(1146): Jetty bound to port 36375 2023-07-18 12:14:43,664 INFO [Listener at localhost/37687] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:14:43,670 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:43,670 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@537ec0a8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:14:43,670 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:43,671 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41ed43db{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:14:43,815 INFO [Listener at localhost/37687] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:14:43,816 INFO [Listener at localhost/37687] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:14:43,817 INFO [Listener at localhost/37687] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:14:43,817 INFO [Listener at localhost/37687] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 12:14:43,860 INFO [Listener at localhost/37687] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:14:43,862 INFO [Listener at localhost/37687] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1a482f9e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/java.io.tmpdir/jetty-0_0_0_0-36375-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6890379012857944717/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:14:43,868 INFO [Listener at localhost/37687] server.AbstractConnector(333): Started ServerConnector@614a5820{HTTP/1.1, (http/1.1)}{0.0.0.0:36375} 2023-07-18 12:14:43,868 INFO [Listener at localhost/37687] server.Server(415): Started @12598ms 2023-07-18 12:14:43,872 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(951): ClusterId : e5be9d35-8260-456d-9f60-42f56ac29974 2023-07-18 12:14:43,875 DEBUG [RS:3;jenkins-hbase4:44567] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:14:43,885 DEBUG [RS:3;jenkins-hbase4:44567] procedure.RegionServerProcedureManagerHost(45): Procedure 
flush-table-proc initialized 2023-07-18 12:14:43,885 DEBUG [RS:3;jenkins-hbase4:44567] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:14:43,900 DEBUG [RS:3;jenkins-hbase4:44567] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:14:43,902 DEBUG [RS:3;jenkins-hbase4:44567] zookeeper.ReadOnlyZKClient(139): Connect 0x65354fdb to 127.0.0.1:50805 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:14:43,915 DEBUG [RS:3;jenkins-hbase4:44567] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6bf2ad0d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:14:43,916 DEBUG [RS:3;jenkins-hbase4:44567] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6fcd24ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:14:43,926 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:44567 2023-07-18 12:14:43,926 INFO [RS:3;jenkins-hbase4:44567] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:14:43,926 INFO [RS:3;jenkins-hbase4:44567] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:14:43,926 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 12:14:43,927 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36151,1689682477215 with isa=jenkins-hbase4.apache.org/172.31.14.131:44567, startcode=1689682483625 2023-07-18 12:14:43,927 DEBUG [RS:3;jenkins-hbase4:44567] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:14:43,932 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59621, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:14:43,933 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36151] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:43,933 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
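[editor's note] The RS:3 startup and "Registering regionserver=...,44567,..." entries above come from the test restoring the cluster to its expected server count. A sketch of roughly how a test adds a region server back to a running minicluster (TEST_UTIL again assumed to be the active HBaseTestingUtility):

```java
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

// Fragment: starts one more in-process region server and blocks until it has
// registered with the master, which is what produces the reportForDuty entries above.
MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
JVMClusterUtil.RegionServerThread rsThread = cluster.startRegionServer();
rsThread.waitForServerOnline();
```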
2023-07-18 12:14:43,933 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae 2023-07-18 12:14:43,933 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46497 2023-07-18 12:14:43,934 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34307 2023-07-18 12:14:43,938 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:14:43,939 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:14:43,939 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:14:43,939 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:14:43,939 DEBUG [RS:3;jenkins-hbase4:44567] zookeeper.ZKUtil(162): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:43,940 WARN [RS:3;jenkins-hbase4:44567] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 12:14:43,940 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44567,1689682483625] 2023-07-18 12:14:43,940 INFO [RS:3;jenkins-hbase4:44567] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:14:43,940 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:43,942 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:43,943 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:43,943 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:43,943 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:43,943 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:43,944 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:43,944 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:43,944 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:43,945 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:43,945 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:43,946 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 12:14:43,946 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:43,949 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:43,950 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:43,950 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36151,1689682477215] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 12:14:43,952 DEBUG [RS:3;jenkins-hbase4:44567] zookeeper.ZKUtil(162): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:43,952 DEBUG [RS:3;jenkins-hbase4:44567] zookeeper.ZKUtil(162): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:43,953 DEBUG [RS:3;jenkins-hbase4:44567] zookeeper.ZKUtil(162): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:43,959 DEBUG [RS:3;jenkins-hbase4:44567] zookeeper.ZKUtil(162): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:43,960 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:14:43,960 INFO [RS:3;jenkins-hbase4:44567] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:14:43,964 INFO [RS:3;jenkins-hbase4:44567] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:14:43,967 INFO [RS:3;jenkins-hbase4:44567] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:14:43,967 INFO [RS:3;jenkins-hbase4:44567] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:43,973 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:14:43,979 INFO [RS:3;jenkins-hbase4:44567] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 12:14:43,979 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,979 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,979 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,979 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,979 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,979 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:14:43,980 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,980 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,980 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,980 DEBUG [RS:3;jenkins-hbase4:44567] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:14:43,986 INFO [RS:3;jenkins-hbase4:44567] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:43,986 INFO [RS:3;jenkins-hbase4:44567] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:43,986 INFO [RS:3;jenkins-hbase4:44567] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:14:44,003 INFO [RS:3;jenkins-hbase4:44567] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:14:44,003 INFO [RS:3;jenkins-hbase4:44567] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44567,1689682483625-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
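The ChoreService entries above register the region server's periodic maintenance tasks (CompactionChecker every second, MemstoreFlusherChore every second, nonceCleaner every six minutes, HeapMemoryTunerChore every minute). A rough equivalent of that chore pattern with a plain JDK scheduler standing in for HBase's ChoreService; the task bodies are placeholders, not the real checks:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ChoreLikeSchedulerSketch {
    public static void main(String[] args) {
        ScheduledExecutorService chores = Executors.newScheduledThreadPool(1);
        // Analogous to "CompactionChecker, period=1000, unit=MILLISECONDS" above;
        // the print stands in for the real compaction-needed check.
        chores.scheduleAtFixedRate(
            () -> System.out.println("compaction checker tick"),
            0, 1000, TimeUnit.MILLISECONDS);
        // Analogous to "nonceCleaner, period=360000": runs every six minutes.
        chores.scheduleAtFixedRate(
            () -> System.out.println("nonce cleaner tick"),
            0, 360_000, TimeUnit.MILLISECONDS);
        // A real server would shut the pool down on stop; omitted in this sketch.
    }
}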
2023-07-18 12:14:44,021 INFO [RS:3;jenkins-hbase4:44567] regionserver.Replication(203): jenkins-hbase4.apache.org,44567,1689682483625 started 2023-07-18 12:14:44,021 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44567,1689682483625, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44567, sessionid=0x101785affaa000b 2023-07-18 12:14:44,021 DEBUG [RS:3;jenkins-hbase4:44567] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:14:44,021 DEBUG [RS:3;jenkins-hbase4:44567] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:44,021 DEBUG [RS:3;jenkins-hbase4:44567] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44567,1689682483625' 2023-07-18 12:14:44,021 DEBUG [RS:3;jenkins-hbase4:44567] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:14:44,022 DEBUG [RS:3;jenkins-hbase4:44567] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:14:44,023 DEBUG [RS:3;jenkins-hbase4:44567] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:14:44,023 DEBUG [RS:3;jenkins-hbase4:44567] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:14:44,023 DEBUG [RS:3;jenkins-hbase4:44567] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:44,023 DEBUG [RS:3;jenkins-hbase4:44567] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44567,1689682483625' 2023-07-18 12:14:44,023 DEBUG [RS:3;jenkins-hbase4:44567] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:14:44,024 DEBUG [RS:3;jenkins-hbase4:44567] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:14:44,024 DEBUG [RS:3;jenkins-hbase4:44567] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:14:44,024 INFO [RS:3;jenkins-hbase4:44567] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 12:14:44,024 INFO [RS:3;jenkins-hbase4:44567] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
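The entries that follow record the test's rsgroup setup against the master at port 36151: a group named master is added, an attempt to move the master's own address into it fails with a ConstraintException (expected, and logged by TestRSGroupsBase as "Got this on setup, FYI"), and then Group_testTableMoveTruncateAndDrop_1982584964 is created and two region servers are moved into it. A minimal sketch of those calls using the RSGroupAdminClient that appears in the stack trace below; the host and port literals are taken from this run, and the connection handling is an assumption rather than the test's actual code:

import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupSetupSketch {
    static void exercise(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);

        // "add rsgroup master" at 12:14:44,028.
        groups.addRSGroup("master");
        try {
            // Moving the master's own address is rejected because it is not a live
            // region server, producing the ConstraintException seen in the trace below.
            groups.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36151)),
                "master");
        } catch (ConstraintException expected) {
            // TestRSGroupsBase logs this as "Got this on setup, FYI" and continues.
        }

        // The group the test actually uses, plus the two region servers moved into it
        // (12:14:44,100 and 12:14:44,127 in the log).
        String group = "Group_testTableMoveTruncateAndDrop_1982584964";
        groups.addRSGroup(group);
        groups.moveServers(
            new HashSet<>(Arrays.asList(
                Address.fromParts("jenkins-hbase4.apache.org", 41985),
                Address.fromParts("jenkins-hbase4.apache.org", 35237))),
            group);

        RSGroupInfo info = groups.getRSGroupInfo(group);
        System.out.println("servers now in " + group + ": " + info.getServers());
    }
}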
2023-07-18 12:14:44,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:44,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:44,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:44,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:44,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:44,047 DEBUG [hconnection-0x120ad869-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:14:44,054 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55536, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:14:44,061 DEBUG [hconnection-0x120ad869-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:14:44,070 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43370, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:14:44,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:44,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:44,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:44,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:44,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:51504 deadline: 1689683684084, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:44,086 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:44,088 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:44,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:44,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:44,090 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:44,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:44,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:44,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:44,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:44,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:44,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:44,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:44,111 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:44,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:44,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:44,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:44,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:44,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:35237] to rsgroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:44,128 INFO [RS:3;jenkins-hbase4:44567] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44567%2C1689682483625, suffix=, logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44567,1689682483625, archiveDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs, maxLogs=32 2023-07-18 12:14:44,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:44,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:44,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:44,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:44,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:44,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 12:14:44,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 12:14:44,168 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 12:14:44,170 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK] 2023-07-18 12:14:44,171 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35237,1689682479509, state=CLOSING 2023-07-18 12:14:44,173 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 12:14:44,173 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:14:44,173 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:44,188 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK] 2023-07-18 12:14:44,188 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK] 2023-07-18 12:14:44,206 INFO [RS:3;jenkins-hbase4:44567] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44567,1689682483625/jenkins-hbase4.apache.org%2C44567%2C1689682483625.1689682484130 2023-07-18 12:14:44,208 DEBUG [RS:3;jenkins-hbase4:44567] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK], DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK]] 2023-07-18 12:14:44,347 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 12:14:44,348 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 12:14:44,348 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 12:14:44,348 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 12:14:44,348 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 12:14:44,348 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 12:14:44,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.49 KB heapSize=5 KB 2023-07-18 12:14:44,443 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.31 KB at sequenceid=14 (bloomFilter=false), 
to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/info/dab18fbcc5e94104a42c584316cb4eb2 2023-07-18 12:14:44,556 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/table/311afbd109b8425fabe21920058a11b6 2023-07-18 12:14:44,567 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/info/dab18fbcc5e94104a42c584316cb4eb2 as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/dab18fbcc5e94104a42c584316cb4eb2 2023-07-18 12:14:44,578 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/dab18fbcc5e94104a42c584316cb4eb2, entries=20, sequenceid=14, filesize=7.0 K 2023-07-18 12:14:44,581 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/table/311afbd109b8425fabe21920058a11b6 as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/311afbd109b8425fabe21920058a11b6 2023-07-18 12:14:44,591 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/311afbd109b8425fabe21920058a11b6, entries=4, sequenceid=14, filesize=4.8 K 2023-07-18 12:14:44,594 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.49 KB/2550, heapSize ~4.72 KB/4832, currentSize=0 B/0 for 1588230740 in 245ms, sequenceid=14, compaction requested=false 2023-07-18 12:14:44,596 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 12:14:44,618 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-07-18 12:14:44,619 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:14:44,620 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 12:14:44,620 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 12:14:44,620 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44567,1689682483625 record at close sequenceid=14 2023-07-18 12:14:44,622 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 12:14:44,623 WARN [PEWorker-5] 
zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-18 12:14:44,626 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 12:14:44,626 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35237,1689682479509 in 450 msec 2023-07-18 12:14:44,627 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44567,1689682483625; forceNewPlan=false, retain=false 2023-07-18 12:14:44,777 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 12:14:44,778 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44567,1689682483625, state=OPENING 2023-07-18 12:14:44,779 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 12:14:44,779 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:14:44,779 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:14:44,933 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:44,933 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:14:44,936 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36982, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:14:44,941 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 12:14:44,941 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:14:44,944 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44567%2C1689682483625.meta, suffix=.meta, logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44567,1689682483625, archiveDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs, maxLogs=32 2023-07-18 12:14:44,968 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK] 2023-07-18 12:14:44,971 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client 
skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK] 2023-07-18 12:14:44,971 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK] 2023-07-18 12:14:44,974 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44567,1689682483625/jenkins-hbase4.apache.org%2C44567%2C1689682483625.meta.1689682484945.meta 2023-07-18 12:14:44,974 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK], DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK]] 2023-07-18 12:14:44,974 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:44,975 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 12:14:44,975 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 12:14:44,975 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
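By this point hbase:meta has been reopened on jenkins-hbase4.apache.org,44567 and /hbase/meta-region-server has been updated, so clients still holding the old location will see a RegionMovedException (as at 12:14:45,207 further down) and re-resolve. A small client-side sketch of forcing that re-resolution; it assumes a Configuration pointing at this mini cluster and is not code from the test itself:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLocationSketch {
    public static void main(String[] args) throws Exception {
        // Quorum, port, etc. are expected to come from hbase-site.xml on the classpath.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator meta = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
            // reload=true bypasses the cached location, which is effectively what a
            // client does after receiving a RegionMovedException.
            HRegionLocation loc = meta.getRegionLocation(Bytes.toBytes(""), true);
            System.out.println("hbase:meta is on " + loc.getServerName());
        }
    }
}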
2023-07-18 12:14:44,975 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 12:14:44,975 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:44,975 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 12:14:44,975 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 12:14:44,978 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 12:14:44,980 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info 2023-07-18 12:14:44,980 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info 2023-07-18 12:14:44,980 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 12:14:44,994 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/dab18fbcc5e94104a42c584316cb4eb2 2023-07-18 12:14:44,995 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:44,995 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 12:14:44,996 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:14:44,997 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier 2023-07-18 
12:14:44,997 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 12:14:44,998 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:44,998 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 12:14:44,999 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table 2023-07-18 12:14:44,999 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table 2023-07-18 12:14:44,999 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 12:14:45,010 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/311afbd109b8425fabe21920058a11b6 2023-07-18 12:14:45,010 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:45,012 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740 2023-07-18 12:14:45,014 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740 2023-07-18 12:14:45,023 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 12:14:45,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 12:14:45,028 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=18; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10875063520, jitterRate=0.012819215655326843}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 12:14:45,028 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 12:14:45,030 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=14, masterSystemTime=1689682484933 2023-07-18 12:14:45,035 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 12:14:45,036 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 12:14:45,037 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44567,1689682483625, state=OPEN 2023-07-18 12:14:45,039 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 12:14:45,039 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:14:45,043 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-18 12:14:45,043 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44567,1689682483625 in 260 msec 2023-07-18 12:14:45,045 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 889 msec 2023-07-18 12:14:45,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-18 12:14:45,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721] are moved back to default 2023-07-18 12:14:45,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:45,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:45,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:45,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:45,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:45,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:45,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:45,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:45,188 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:14:45,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-18 12:14:45,194 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:45,195 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:45,195 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:45,196 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:45,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:14:45,205 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:14:45,207 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35237] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Get size: 151 connection: 172.31.14.131:55520 deadline: 1689682545207, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44567 startCode=1689682483625. As of locationSeqNum=14. 
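The create request logged at 12:14:45,184 describes the table: a single column family f with default attributes and REGION_REPLICATION => '1', driven by CreateTableProcedure pid=15 while the client polls "Checking to see if procedure is done". A hedged sketch of an equivalent client call; only the first split point ('aaaaa') is spelled out, and the remaining binary split keys visible in the RegionOpenAndInit entries below are deliberately elided:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
    static void createTable(Admin admin) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setRegionReplication(1)  // REGION_REPLICATION => '1' in the logged descriptor
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)    // VERSIONS => '1'
                .build())
            .build();
        // First split point copied from the log; the other binary split points
        // (i\xBF\x14i\xBE, r\x1C\xC7r\x1B, ...) are omitted from this sketch.
        byte[][] splitKeys = { Bytes.toBytes("aaaaa") };
        // Blocks until the master's CreateTableProcedure (pid=15 in this run) completes.
        admin.createTable(desc, splitKeys);
    }
}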
2023-07-18 12:14:45,308 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:14:45,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:14:45,310 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36984, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:14:45,323 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:45,327 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 empty. 2023-07-18 12:14:45,327 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:45,327 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:45,327 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:45,327 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:45,328 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:45,329 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 empty. 2023-07-18 12:14:45,329 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 empty. 2023-07-18 12:14:45,329 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 empty. 
2023-07-18 12:14:45,329 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:45,329 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:45,330 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:45,331 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 empty. 2023-07-18 12:14:45,332 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:45,332 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 12:14:45,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:14:45,766 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:45,768 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 69c42c802eb19b3e18523b4f8abd3885, NAME => 'Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:45,768 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => da451990537b4adcd2f77ee99d13a424, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:45,769 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 200a6017b0aa493242a0b27c624a2a96, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:45,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:14:45,837 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:45,838 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 200a6017b0aa493242a0b27c624a2a96, disabling compactions & flushes 2023-07-18 12:14:45,838 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:45,838 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:45,838 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. after waiting 0 ms 2023-07-18 12:14:45,838 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:45,838 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 
2023-07-18 12:14:45,838 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 200a6017b0aa493242a0b27c624a2a96: 2023-07-18 12:14:45,839 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => d59b998b9371efcbe3070efc0f8ffe90, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:45,878 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:45,879 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing d59b998b9371efcbe3070efc0f8ffe90, disabling compactions & flushes 2023-07-18 12:14:45,879 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:45,879 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:45,879 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. after waiting 0 ms 2023-07-18 12:14:45,879 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:45,879 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 
2023-07-18 12:14:45,879 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for d59b998b9371efcbe3070efc0f8ffe90: 2023-07-18 12:14:45,879 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => aa9497048283832ce04b2abd6d971dd3, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:45,915 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:45,915 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing aa9497048283832ce04b2abd6d971dd3, disabling compactions & flushes 2023-07-18 12:14:45,915 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:45,915 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:45,915 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. after waiting 0 ms 2023-07-18 12:14:45,915 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:45,915 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 
2023-07-18 12:14:45,916 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for aa9497048283832ce04b2abd6d971dd3: 2023-07-18 12:14:46,231 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:46,232 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 69c42c802eb19b3e18523b4f8abd3885, disabling compactions & flushes 2023-07-18 12:14:46,232 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:46,232 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:46,232 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. after waiting 0 ms 2023-07-18 12:14:46,232 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:46,232 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:46,233 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 69c42c802eb19b3e18523b4f8abd3885: 2023-07-18 12:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing da451990537b4adcd2f77ee99d13a424, disabling compactions & flushes 2023-07-18 12:14:46,234 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 
after waiting 0 ms 2023-07-18 12:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:46,234 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:46,235 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for da451990537b4adcd2f77ee99d13a424: 2023-07-18 12:14:46,242 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:14:46,244 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486243"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682486243"}]},"ts":"1689682486243"} 2023-07-18 12:14:46,244 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486243"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682486243"}]},"ts":"1689682486243"} 2023-07-18 12:14:46,244 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682486243"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682486243"}]},"ts":"1689682486243"} 2023-07-18 12:14:46,244 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682486243"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682486243"}]},"ts":"1689682486243"} 2023-07-18 12:14:46,244 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486243"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682486243"}]},"ts":"1689682486243"} 2023-07-18 12:14:46,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:14:46,328 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-18 12:14:46,335 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:14:46,336 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682486335"}]},"ts":"1689682486335"} 2023-07-18 12:14:46,340 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 12:14:46,354 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:46,354 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:46,354 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:46,354 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:46,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, ASSIGN}] 2023-07-18 12:14:46,358 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, ASSIGN 2023-07-18 12:14:46,358 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, ASSIGN 2023-07-18 12:14:46,360 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, ASSIGN 2023-07-18 12:14:46,360 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, ASSIGN 2023-07-18 12:14:46,361 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:14:46,362 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, ASSIGN 2023-07-18 12:14:46,362 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44567,1689682483625; forceNewPlan=false, retain=false 2023-07-18 12:14:46,362 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44567,1689682483625; forceNewPlan=false, retain=false 2023-07-18 12:14:46,363 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:14:46,364 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44567,1689682483625; forceNewPlan=false, retain=false 2023-07-18 12:14:46,512 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 12:14:46,516 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=69c42c802eb19b3e18523b4f8abd3885, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:46,516 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=d59b998b9371efcbe3070efc0f8ffe90, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:46,516 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=aa9497048283832ce04b2abd6d971dd3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:46,516 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=da451990537b4adcd2f77ee99d13a424, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:46,516 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=200a6017b0aa493242a0b27c624a2a96, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:46,517 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682486516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682486516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682486516"}]},"ts":"1689682486516"} 2023-07-18 12:14:46,517 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682486516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682486516"}]},"ts":"1689682486516"} 2023-07-18 12:14:46,517 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682486516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682486516"}]},"ts":"1689682486516"} 2023-07-18 12:14:46,517 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682486516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682486516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682486516"}]},"ts":"1689682486516"} 2023-07-18 12:14:46,517 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682486516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682486516"}]},"ts":"1689682486516"} 2023-07-18 12:14:46,520 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE; OpenRegionProcedure 
aa9497048283832ce04b2abd6d971dd3, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:14:46,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=19, state=RUNNABLE; OpenRegionProcedure d59b998b9371efcbe3070efc0f8ffe90, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:46,522 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=18, state=RUNNABLE; OpenRegionProcedure 200a6017b0aa493242a0b27c624a2a96, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:14:46,524 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=16, state=RUNNABLE; OpenRegionProcedure 69c42c802eb19b3e18523b4f8abd3885, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:46,525 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=17, state=RUNNABLE; OpenRegionProcedure da451990537b4adcd2f77ee99d13a424, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:14:46,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:46,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa9497048283832ce04b2abd6d971dd3, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 12:14:46,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 
2023-07-18 12:14:46,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:46,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d59b998b9371efcbe3070efc0f8ffe90, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 12:14:46,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:46,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:46,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:46,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:46,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:46,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:46,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:46,703 INFO [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:46,707 INFO [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:46,709 DEBUG [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/f 2023-07-18 12:14:46,709 DEBUG [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/f 2023-07-18 12:14:46,710 INFO [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d59b998b9371efcbe3070efc0f8ffe90 columnFamilyName f 2023-07-18 12:14:46,711 INFO [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] regionserver.HStore(310): Store=d59b998b9371efcbe3070efc0f8ffe90/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:46,712 DEBUG [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/f 2023-07-18 12:14:46,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:46,713 DEBUG [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/f 2023-07-18 12:14:46,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:46,715 INFO [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa9497048283832ce04b2abd6d971dd3 columnFamilyName f 2023-07-18 12:14:46,715 INFO [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] regionserver.HStore(310): Store=aa9497048283832ce04b2abd6d971dd3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:46,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:46,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:46,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:46,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:46,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:46,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:46,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d59b998b9371efcbe3070efc0f8ffe90; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11167902400, jitterRate=0.04009196162223816}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:46,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d59b998b9371efcbe3070efc0f8ffe90: 2023-07-18 12:14:46,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa9497048283832ce04b2abd6d971dd3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10269131840, jitterRate=-0.04361256957054138}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:46,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa9497048283832ce04b2abd6d971dd3: 2023-07-18 12:14:46,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3., pid=21, masterSystemTime=1689682486678 2023-07-18 12:14:46,731 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90., pid=22, masterSystemTime=1689682486678 2023-07-18 12:14:46,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 
2023-07-18 12:14:46,732 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:46,732 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:46,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 200a6017b0aa493242a0b27c624a2a96, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 12:14:46,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:46,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:46,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:46,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:46,734 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=aa9497048283832ce04b2abd6d971dd3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:46,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:46,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:46,734 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682486734"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682486734"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682486734"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682486734"}]},"ts":"1689682486734"} 2023-07-18 12:14:46,735 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 
2023-07-18 12:14:46,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 69c42c802eb19b3e18523b4f8abd3885, NAME => 'Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 12:14:46,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:46,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:46,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:46,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:46,736 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=d59b998b9371efcbe3070efc0f8ffe90, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:46,737 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486736"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682486736"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682486736"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682486736"}]},"ts":"1689682486736"} 2023-07-18 12:14:46,738 INFO [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:46,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-18 12:14:46,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; OpenRegionProcedure aa9497048283832ce04b2abd6d971dd3, server=jenkins-hbase4.apache.org,44567,1689682483625 in 220 msec 2023-07-18 12:14:46,744 DEBUG [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/f 2023-07-18 12:14:46,745 DEBUG [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/f 2023-07-18 12:14:46,746 INFO [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 200a6017b0aa493242a0b27c624a2a96 columnFamilyName f 2023-07-18 12:14:46,744 INFO [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:46,749 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, ASSIGN in 388 msec 2023-07-18 12:14:46,751 DEBUG [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/f 2023-07-18 12:14:46,751 DEBUG [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/f 2023-07-18 12:14:46,751 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=19 2023-07-18 12:14:46,751 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=19, state=SUCCESS; OpenRegionProcedure d59b998b9371efcbe3070efc0f8ffe90, server=jenkins-hbase4.apache.org,44601,1689682479947 in 224 msec 2023-07-18 12:14:46,752 INFO [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 69c42c802eb19b3e18523b4f8abd3885 columnFamilyName f 2023-07-18 12:14:46,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, ASSIGN in 396 msec 2023-07-18 12:14:46,754 INFO [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] regionserver.HStore(310): Store=200a6017b0aa493242a0b27c624a2a96/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:46,756 INFO [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] 
regionserver.HStore(310): Store=69c42c802eb19b3e18523b4f8abd3885/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:46,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:46,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:46,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:46,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:46,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:46,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:46,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:46,771 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 69c42c802eb19b3e18523b4f8abd3885; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10147209920, jitterRate=-0.05496743321418762}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:46,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 69c42c802eb19b3e18523b4f8abd3885: 2023-07-18 12:14:46,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:46,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885., pid=24, masterSystemTime=1689682486678 2023-07-18 12:14:46,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 200a6017b0aa493242a0b27c624a2a96; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11783552320, jitterRate=0.0974288284778595}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:46,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 200a6017b0aa493242a0b27c624a2a96: 2023-07-18 12:14:46,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:46,774 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:46,775 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=69c42c802eb19b3e18523b4f8abd3885, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:46,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96., pid=23, masterSystemTime=1689682486678 2023-07-18 12:14:46,775 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682486775"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682486775"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682486775"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682486775"}]},"ts":"1689682486775"} 2023-07-18 12:14:46,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:46,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:46,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 
2023-07-18 12:14:46,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da451990537b4adcd2f77ee99d13a424, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 12:14:46,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:46,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:46,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:46,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:46,779 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=200a6017b0aa493242a0b27c624a2a96, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:46,780 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486779"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682486779"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682486779"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682486779"}]},"ts":"1689682486779"} 2023-07-18 12:14:46,781 INFO [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:46,782 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=16 2023-07-18 12:14:46,783 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=16, state=SUCCESS; OpenRegionProcedure 69c42c802eb19b3e18523b4f8abd3885, server=jenkins-hbase4.apache.org,44601,1689682479947 in 254 msec 2023-07-18 12:14:46,785 DEBUG [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/f 2023-07-18 12:14:46,785 DEBUG [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/f 2023-07-18 12:14:46,786 INFO [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da451990537b4adcd2f77ee99d13a424 columnFamilyName f 2023-07-18 12:14:46,787 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, ASSIGN in 428 msec 2023-07-18 12:14:46,787 INFO [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] regionserver.HStore(310): Store=da451990537b4adcd2f77ee99d13a424/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:46,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=18 2023-07-18 12:14:46,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:46,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; OpenRegionProcedure 200a6017b0aa493242a0b27c624a2a96, server=jenkins-hbase4.apache.org,44567,1689682483625 in 263 msec 2023-07-18 12:14:46,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:46,791 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, ASSIGN in 434 msec 2023-07-18 12:14:46,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:46,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:46,801 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da451990537b4adcd2f77ee99d13a424; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11342893920, jitterRate=0.05638931691646576}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:46,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da451990537b4adcd2f77ee99d13a424: 2023-07-18 12:14:46,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424., pid=25, masterSystemTime=1689682486678 2023-07-18 12:14:46,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:46,808 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:46,814 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=da451990537b4adcd2f77ee99d13a424, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:46,814 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682486813"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682486813"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682486813"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682486813"}]},"ts":"1689682486813"} 2023-07-18 12:14:46,823 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=17 2023-07-18 12:14:46,823 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=17, state=SUCCESS; OpenRegionProcedure da451990537b4adcd2f77ee99d13a424, server=jenkins-hbase4.apache.org,44567,1689682483625 in 294 msec 2023-07-18 12:14:46,832 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-18 12:14:46,834 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, ASSIGN in 468 msec 2023-07-18 12:14:46,836 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:14:46,836 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682486836"}]},"ts":"1689682486836"} 2023-07-18 12:14:46,839 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 12:14:46,842 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:14:46,845 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.6580 sec 2023-07-18 12:14:47,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:14:47,332 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 
2023-07-18 12:14:47,333 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-18 12:14:47,334 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:47,335 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35237] ipc.CallRunner(144): callId: 51 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:55524 deadline: 1689682547335, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44567 startCode=1689682483625. As of locationSeqNum=14. 2023-07-18 12:14:47,440 DEBUG [hconnection-0x497c82a-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:14:47,444 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36992, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:14:47,463 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-18 12:14:47,464 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:47,464 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-18 12:14:47,465 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:47,469 DEBUG [Listener at localhost/37687] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:14:47,480 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55552, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:14:47,483 DEBUG [Listener at localhost/37687] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:14:47,489 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47938, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:14:47,490 DEBUG [Listener at localhost/37687] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:14:47,495 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37004, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:14:47,497 DEBUG [Listener at localhost/37687] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:14:47,501 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43372, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:14:47,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:47,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:14:47,515 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:47,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:47,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:47,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region 69c42c802eb19b3e18523b4f8abd3885 to RSGroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:47,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:47,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:47,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:47,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:47,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, REOPEN/MOVE 2023-07-18 12:14:47,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region da451990537b4adcd2f77ee99d13a424 to RSGroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,537 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, REOPEN/MOVE 2023-07-18 12:14:47,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 
2023-07-18 12:14:47,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:47,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:47,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:47,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:47,538 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=69c42c802eb19b3e18523b4f8abd3885, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:47,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, REOPEN/MOVE 2023-07-18 12:14:47,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region 200a6017b0aa493242a0b27c624a2a96 to RSGroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,538 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682487538"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487538"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487538"}]},"ts":"1689682487538"} 2023-07-18 12:14:47,539 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, REOPEN/MOVE 2023-07-18 12:14:47,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:47,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:47,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:47,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:47,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:47,541 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=da451990537b4adcd2f77ee99d13a424, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:47,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, REOPEN/MOVE 2023-07-18 12:14:47,541 DEBUG [PEWorker-2] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487541"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487541"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487541"}]},"ts":"1689682487541"} 2023-07-18 12:14:47,542 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, REOPEN/MOVE 2023-07-18 12:14:47,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region d59b998b9371efcbe3070efc0f8ffe90 to RSGroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:47,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:47,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:47,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:47,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:47,543 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 69c42c802eb19b3e18523b4f8abd3885, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:47,544 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=200a6017b0aa493242a0b27c624a2a96, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:47,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, REOPEN/MOVE 2023-07-18 12:14:47,544 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487544"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487544"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487544"}]},"ts":"1689682487544"} 2023-07-18 12:14:47,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region aa9497048283832ce04b2abd6d971dd3 to RSGroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:47,545 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, REOPEN/MOVE 2023-07-18 12:14:47,545 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure da451990537b4adcd2f77ee99d13a424, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:14:47,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:47,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:47,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:47,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:47,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:47,547 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure 200a6017b0aa493242a0b27c624a2a96, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:14:47,548 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=d59b998b9371efcbe3070efc0f8ffe90, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:47,548 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487548"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487548"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487548"}]},"ts":"1689682487548"} 2023-07-18 12:14:47,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, REOPEN/MOVE 2023-07-18 12:14:47,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1982584964, current retry=0 2023-07-18 12:14:47,549 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, REOPEN/MOVE 2023-07-18 12:14:47,551 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure d59b998b9371efcbe3070efc0f8ffe90, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:47,551 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=aa9497048283832ce04b2abd6d971dd3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:14:47,551 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682487551"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487551"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487551"}]},"ts":"1689682487551"} 2023-07-18 12:14:47,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure aa9497048283832ce04b2abd6d971dd3, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:14:47,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:47,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d59b998b9371efcbe3070efc0f8ffe90, disabling compactions & flushes 2023-07-18 12:14:47,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:47,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:47,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. after waiting 0 ms 2023-07-18 12:14:47,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:47,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:47,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 200a6017b0aa493242a0b27c624a2a96, disabling compactions & flushes 2023-07-18 12:14:47,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:47,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:47,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. after waiting 0 ms 2023-07-18 12:14:47,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 
2023-07-18 12:14:47,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:47,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:47,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:47,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d59b998b9371efcbe3070efc0f8ffe90: 2023-07-18 12:14:47,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d59b998b9371efcbe3070efc0f8ffe90 move to jenkins-hbase4.apache.org,41985,1689682479721 record at close sequenceid=2 2023-07-18 12:14:47,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:47,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 200a6017b0aa493242a0b27c624a2a96: 2023-07-18 12:14:47,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 200a6017b0aa493242a0b27c624a2a96 move to jenkins-hbase4.apache.org,35237,1689682479509 record at close sequenceid=2 2023-07-18 12:14:47,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:47,717 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 12:14:47,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:47,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 69c42c802eb19b3e18523b4f8abd3885, disabling compactions & flushes 2023-07-18 12:14:47,718 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:47,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:47,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. after waiting 0 ms 2023-07-18 12:14:47,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 
2023-07-18 12:14:47,718 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=d59b998b9371efcbe3070efc0f8ffe90, regionState=CLOSED 2023-07-18 12:14:47,719 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682487718"}]},"ts":"1689682487718"} 2023-07-18 12:14:47,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:47,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:47,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa9497048283832ce04b2abd6d971dd3, disabling compactions & flushes 2023-07-18 12:14:47,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:47,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:47,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. after waiting 0 ms 2023-07-18 12:14:47,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:47,724 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=200a6017b0aa493242a0b27c624a2a96, regionState=CLOSED 2023-07-18 12:14:47,725 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487724"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682487724"}]},"ts":"1689682487724"} 2023-07-18 12:14:47,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:47,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 
2023-07-18 12:14:47,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 69c42c802eb19b3e18523b4f8abd3885: 2023-07-18 12:14:47,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-18 12:14:47,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 69c42c802eb19b3e18523b4f8abd3885 move to jenkins-hbase4.apache.org,35237,1689682479509 record at close sequenceid=2 2023-07-18 12:14:47,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure d59b998b9371efcbe3070efc0f8ffe90, server=jenkins-hbase4.apache.org,44601,1689682479947 in 174 msec 2023-07-18 12:14:47,733 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-18 12:14:47,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:47,734 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure 200a6017b0aa493242a0b27c624a2a96, server=jenkins-hbase4.apache.org,44567,1689682483625 in 180 msec 2023-07-18 12:14:47,735 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41985,1689682479721; forceNewPlan=false, retain=false 2023-07-18 12:14:47,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 
2023-07-18 12:14:47,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa9497048283832ce04b2abd6d971dd3: 2023-07-18 12:14:47,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding aa9497048283832ce04b2abd6d971dd3 move to jenkins-hbase4.apache.org,41985,1689682479721 record at close sequenceid=2 2023-07-18 12:14:47,736 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35237,1689682479509; forceNewPlan=false, retain=false 2023-07-18 12:14:47,737 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=69c42c802eb19b3e18523b4f8abd3885, regionState=CLOSED 2023-07-18 12:14:47,738 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682487737"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682487737"}]},"ts":"1689682487737"} 2023-07-18 12:14:47,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:47,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:47,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:47,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da451990537b4adcd2f77ee99d13a424, disabling compactions & flushes 2023-07-18 12:14:47,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:47,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:47,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. after waiting 0 ms 2023-07-18 12:14:47,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 
2023-07-18 12:14:47,740 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=aa9497048283832ce04b2abd6d971dd3, regionState=CLOSED 2023-07-18 12:14:47,740 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682487740"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682487740"}]},"ts":"1689682487740"} 2023-07-18 12:14:47,744 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-18 12:14:47,744 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 69c42c802eb19b3e18523b4f8abd3885, server=jenkins-hbase4.apache.org,44601,1689682479947 in 198 msec 2023-07-18 12:14:47,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-18 12:14:47,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure aa9497048283832ce04b2abd6d971dd3, server=jenkins-hbase4.apache.org,44567,1689682483625 in 187 msec 2023-07-18 12:14:47,745 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35237,1689682479509; forceNewPlan=false, retain=false 2023-07-18 12:14:47,757 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41985,1689682479721; forceNewPlan=false, retain=false 2023-07-18 12:14:47,768 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:47,769 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 
2023-07-18 12:14:47,769 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da451990537b4adcd2f77ee99d13a424: 2023-07-18 12:14:47,769 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding da451990537b4adcd2f77ee99d13a424 move to jenkins-hbase4.apache.org,41985,1689682479721 record at close sequenceid=2 2023-07-18 12:14:47,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:47,773 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=da451990537b4adcd2f77ee99d13a424, regionState=CLOSED 2023-07-18 12:14:47,773 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487772"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682487772"}]},"ts":"1689682487772"} 2023-07-18 12:14:47,778 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-18 12:14:47,778 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure da451990537b4adcd2f77ee99d13a424, server=jenkins-hbase4.apache.org,44567,1689682483625 in 230 msec 2023-07-18 12:14:47,779 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41985,1689682479721; forceNewPlan=false, retain=false 2023-07-18 12:14:47,819 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 12:14:47,820 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 12:14:47,820 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:14:47,821 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 12:14:47,821 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 12:14:47,821 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 12:14:47,822 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 12:14:47,823 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 
'hbase:rsgroup' 2023-07-18 12:14:47,885 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-18 12:14:47,885 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=69c42c802eb19b3e18523b4f8abd3885, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:47,885 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=aa9497048283832ce04b2abd6d971dd3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:47,885 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=200a6017b0aa493242a0b27c624a2a96, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:47,886 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682487885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487885"}]},"ts":"1689682487885"} 2023-07-18 12:14:47,885 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=d59b998b9371efcbe3070efc0f8ffe90, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:47,885 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=da451990537b4adcd2f77ee99d13a424, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:47,886 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487885"}]},"ts":"1689682487885"} 2023-07-18 12:14:47,886 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487885"}]},"ts":"1689682487885"} 2023-07-18 12:14:47,886 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682487885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487885"}]},"ts":"1689682487885"} 2023-07-18 12:14:47,886 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682487885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682487885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682487885"}]},"ts":"1689682487885"} 2023-07-18 
12:14:47,888 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; OpenRegionProcedure aa9497048283832ce04b2abd6d971dd3, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:47,889 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=28, state=RUNNABLE; OpenRegionProcedure 200a6017b0aa493242a0b27c624a2a96, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:47,891 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=27, state=RUNNABLE; OpenRegionProcedure da451990537b4adcd2f77ee99d13a424, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:47,892 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=26, state=RUNNABLE; OpenRegionProcedure 69c42c802eb19b3e18523b4f8abd3885, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:47,895 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=30, state=RUNNABLE; OpenRegionProcedure d59b998b9371efcbe3070efc0f8ffe90, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:48,041 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:48,041 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:14:48,044 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47948, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:14:48,049 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:48,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 200a6017b0aa493242a0b27c624a2a96, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 12:14:48,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:48,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,050 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 
2023-07-18 12:14:48,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da451990537b4adcd2f77ee99d13a424, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 12:14:48,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:48,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,052 INFO [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,052 INFO [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,054 DEBUG [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/f 2023-07-18 12:14:48,054 DEBUG [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/f 2023-07-18 12:14:48,057 INFO [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 200a6017b0aa493242a0b27c624a2a96 columnFamilyName f 2023-07-18 12:14:48,057 DEBUG [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/f 2023-07-18 12:14:48,057 DEBUG [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/f 2023-07-18 12:14:48,058 INFO [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da451990537b4adcd2f77ee99d13a424 columnFamilyName f 2023-07-18 12:14:48,058 INFO [StoreOpener-200a6017b0aa493242a0b27c624a2a96-1] regionserver.HStore(310): Store=200a6017b0aa493242a0b27c624a2a96/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:48,059 INFO [StoreOpener-da451990537b4adcd2f77ee99d13a424-1] regionserver.HStore(310): Store=da451990537b4adcd2f77ee99d13a424/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:48,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da451990537b4adcd2f77ee99d13a424; next sequenceid=5; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10431194880, jitterRate=-0.028519272804260254}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:48,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da451990537b4adcd2f77ee99d13a424: 2023-07-18 12:14:48,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 200a6017b0aa493242a0b27c624a2a96; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11584786880, jitterRate=0.07891735434532166}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:48,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 200a6017b0aa493242a0b27c624a2a96: 2023-07-18 12:14:48,070 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96., pid=37, masterSystemTime=1689682488042 2023-07-18 12:14:48,072 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424., pid=38, masterSystemTime=1689682488041 2023-07-18 12:14:48,080 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:48,080 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:48,080 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 
2023-07-18 12:14:48,080 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 69c42c802eb19b3e18523b4f8abd3885, NAME => 'Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 12:14:48,080 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=200a6017b0aa493242a0b27c624a2a96, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:48,080 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488080"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682488080"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682488080"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682488080"}]},"ts":"1689682488080"} 2023-07-18 12:14:48,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:48,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:48,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,085 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=da451990537b4adcd2f77ee99d13a424, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:48,085 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488084"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682488084"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682488084"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682488084"}]},"ts":"1689682488084"} 2023-07-18 12:14:48,086 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:48,086 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 
2023-07-18 12:14:48,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa9497048283832ce04b2abd6d971dd3, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 12:14:48,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:48,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,088 INFO [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,090 DEBUG [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/f 2023-07-18 12:14:48,090 DEBUG [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/f 2023-07-18 12:14:48,090 INFO [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 69c42c802eb19b3e18523b4f8abd3885 columnFamilyName f 2023-07-18 12:14:48,090 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=28 2023-07-18 12:14:48,091 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=28, state=SUCCESS; OpenRegionProcedure 200a6017b0aa493242a0b27c624a2a96, server=jenkins-hbase4.apache.org,35237,1689682479509 in 198 msec 2023-07-18 12:14:48,091 INFO [StoreOpener-69c42c802eb19b3e18523b4f8abd3885-1] regionserver.HStore(310): Store=69c42c802eb19b3e18523b4f8abd3885/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:48,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=27 2023-07-18 12:14:48,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=27, state=SUCCESS; OpenRegionProcedure da451990537b4adcd2f77ee99d13a424, server=jenkins-hbase4.apache.org,41985,1689682479721 in 197 msec 2023-07-18 12:14:48,093 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, REOPEN/MOVE in 552 msec 2023-07-18 12:14:48,094 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, REOPEN/MOVE in 555 msec 2023-07-18 12:14:48,095 INFO [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,096 DEBUG [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/f 2023-07-18 12:14:48,096 DEBUG [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/f 2023-07-18 12:14:48,097 INFO [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa9497048283832ce04b2abd6d971dd3 columnFamilyName f 2023-07-18 12:14:48,097 INFO [StoreOpener-aa9497048283832ce04b2abd6d971dd3-1] regionserver.HStore(310): Store=aa9497048283832ce04b2abd6d971dd3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:48,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 69c42c802eb19b3e18523b4f8abd3885; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9512448800, jitterRate=-0.1140841692686081}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:48,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 69c42c802eb19b3e18523b4f8abd3885: 2023-07-18 12:14:48,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,104 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885., pid=39, masterSystemTime=1689682488042 2023-07-18 12:14:48,106 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa9497048283832ce04b2abd6d971dd3; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10711300800, jitterRate=-0.0024323761463165283}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:48,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa9497048283832ce04b2abd6d971dd3: 2023-07-18 12:14:48,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:48,107 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 
2023-07-18 12:14:48,109 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3., pid=36, masterSystemTime=1689682488041 2023-07-18 12:14:48,110 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=69c42c802eb19b3e18523b4f8abd3885, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:48,110 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682488109"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682488109"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682488109"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682488109"}]},"ts":"1689682488109"} 2023-07-18 12:14:48,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:48,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:48,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:48,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d59b998b9371efcbe3070efc0f8ffe90, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 12:14:48,113 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=aa9497048283832ce04b2abd6d971dd3, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:48,113 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682488113"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682488113"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682488113"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682488113"}]},"ts":"1689682488113"} 2023-07-18 12:14:48,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:48,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:48,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 
12:14:48,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:48,116 INFO [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:48,117 DEBUG [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/f 2023-07-18 12:14:48,117 DEBUG [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/f 2023-07-18 12:14:48,118 INFO [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d59b998b9371efcbe3070efc0f8ffe90 columnFamilyName f 2023-07-18 12:14:48,119 INFO [StoreOpener-d59b998b9371efcbe3070efc0f8ffe90-1] regionserver.HStore(310): Store=d59b998b9371efcbe3070efc0f8ffe90/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:48,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:48,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:48,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:48,130 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d59b998b9371efcbe3070efc0f8ffe90; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11963829760, jitterRate=0.11421847343444824}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:48,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d59b998b9371efcbe3070efc0f8ffe90: 2023-07-18 12:14:48,131 
INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90., pid=40, masterSystemTime=1689682488041 2023-07-18 12:14:48,132 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=26 2023-07-18 12:14:48,132 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=26, state=SUCCESS; OpenRegionProcedure 69c42c802eb19b3e18523b4f8abd3885, server=jenkins-hbase4.apache.org,35237,1689682479509 in 235 msec 2023-07-18 12:14:48,134 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-18 12:14:48,134 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; OpenRegionProcedure aa9497048283832ce04b2abd6d971dd3, server=jenkins-hbase4.apache.org,41985,1689682479721 in 241 msec 2023-07-18 12:14:48,135 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, REOPEN/MOVE in 598 msec 2023-07-18 12:14:48,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:48,137 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=d59b998b9371efcbe3070efc0f8ffe90, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:48,137 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488137"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682488137"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682488137"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682488137"}]},"ts":"1689682488137"} 2023-07-18 12:14:48,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, REOPEN/MOVE in 588 msec 2023-07-18 12:14:48,138 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 
2023-07-18 12:14:48,143 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=30 2023-07-18 12:14:48,143 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=30, state=SUCCESS; OpenRegionProcedure d59b998b9371efcbe3070efc0f8ffe90, server=jenkins-hbase4.apache.org,41985,1689682479721 in 245 msec 2023-07-18 12:14:48,145 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, REOPEN/MOVE in 600 msec 2023-07-18 12:14:48,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-18 12:14:48,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1982584964. 2023-07-18 12:14:48,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:48,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:48,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:48,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:48,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:14:48,558 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:48,566 INFO [Listener at localhost/37687] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:48,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:48,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:48,584 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682488584"}]},"ts":"1689682488584"} 2023-07-18 12:14:48,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-18 12:14:48,587 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 12:14:48,589 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set 
Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 12:14:48,591 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, UNASSIGN}] 2023-07-18 12:14:48,594 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, UNASSIGN 2023-07-18 12:14:48,594 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, UNASSIGN 2023-07-18 12:14:48,595 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, UNASSIGN 2023-07-18 12:14:48,595 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, UNASSIGN 2023-07-18 12:14:48,595 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, UNASSIGN 2023-07-18 12:14:48,596 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=da451990537b4adcd2f77ee99d13a424, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:48,596 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=d59b998b9371efcbe3070efc0f8ffe90, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:48,596 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682488596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682488596"}]},"ts":"1689682488596"} 2023-07-18 12:14:48,596 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=69c42c802eb19b3e18523b4f8abd3885, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:48,596 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682488596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682488596"}]},"ts":"1689682488596"} 2023-07-18 12:14:48,597 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682488596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682488596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682488596"}]},"ts":"1689682488596"} 2023-07-18 12:14:48,597 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=200a6017b0aa493242a0b27c624a2a96, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:48,597 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488597"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682488597"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682488597"}]},"ts":"1689682488597"} 2023-07-18 12:14:48,598 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=aa9497048283832ce04b2abd6d971dd3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:48,598 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682488598"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682488598"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682488598"}]},"ts":"1689682488598"} 2023-07-18 12:14:48,600 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=43, state=RUNNABLE; CloseRegionProcedure da451990537b4adcd2f77ee99d13a424, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:48,601 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=45, state=RUNNABLE; CloseRegionProcedure d59b998b9371efcbe3070efc0f8ffe90, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:48,603 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=42, state=RUNNABLE; CloseRegionProcedure 69c42c802eb19b3e18523b4f8abd3885, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:48,604 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=44, state=RUNNABLE; CloseRegionProcedure 200a6017b0aa493242a0b27c624a2a96, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:48,606 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure aa9497048283832ce04b2abd6d971dd3, 
server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:48,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-18 12:14:48,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da451990537b4adcd2f77ee99d13a424, disabling compactions & flushes 2023-07-18 12:14:48,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:48,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:48,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. after waiting 0 ms 2023-07-18 12:14:48,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 2023-07-18 12:14:48,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 69c42c802eb19b3e18523b4f8abd3885, disabling compactions & flushes 2023-07-18 12:14:48,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:48,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:48,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. after waiting 0 ms 2023-07-18 12:14:48,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:48,769 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:14:48,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424. 
2023-07-18 12:14:48,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da451990537b4adcd2f77ee99d13a424: 2023-07-18 12:14:48,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:14:48,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885. 2023-07-18 12:14:48,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 69c42c802eb19b3e18523b4f8abd3885: 2023-07-18 12:14:48,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:48,781 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:48,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d59b998b9371efcbe3070efc0f8ffe90, disabling compactions & flushes 2023-07-18 12:14:48,782 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:48,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:48,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. after waiting 0 ms 2023-07-18 12:14:48,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 
2023-07-18 12:14:48,783 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=da451990537b4adcd2f77ee99d13a424, regionState=CLOSED 2023-07-18 12:14:48,783 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488783"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682488783"}]},"ts":"1689682488783"} 2023-07-18 12:14:48,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:48,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 200a6017b0aa493242a0b27c624a2a96, disabling compactions & flushes 2023-07-18 12:14:48,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:48,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 2023-07-18 12:14:48,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. after waiting 0 ms 2023-07-18 12:14:48,786 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 
2023-07-18 12:14:48,787 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=69c42c802eb19b3e18523b4f8abd3885, regionState=CLOSED 2023-07-18 12:14:48,787 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682488787"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682488787"}]},"ts":"1689682488787"} 2023-07-18 12:14:48,794 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=43 2023-07-18 12:14:48,794 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=43, state=SUCCESS; CloseRegionProcedure da451990537b4adcd2f77ee99d13a424, server=jenkins-hbase4.apache.org,41985,1689682479721 in 187 msec 2023-07-18 12:14:48,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=42 2023-07-18 12:14:48,796 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=da451990537b4adcd2f77ee99d13a424, UNASSIGN in 203 msec 2023-07-18 12:14:48,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=42, state=SUCCESS; CloseRegionProcedure 69c42c802eb19b3e18523b4f8abd3885, server=jenkins-hbase4.apache.org,35237,1689682479509 in 188 msec 2023-07-18 12:14:48,798 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69c42c802eb19b3e18523b4f8abd3885, UNASSIGN in 205 msec 2023-07-18 12:14:48,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:14:48,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:14:48,813 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90. 2023-07-18 12:14:48,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d59b998b9371efcbe3070efc0f8ffe90: 2023-07-18 12:14:48,814 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96. 
2023-07-18 12:14:48,814 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 200a6017b0aa493242a0b27c624a2a96: 2023-07-18 12:14:48,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:48,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa9497048283832ce04b2abd6d971dd3, disabling compactions & flushes 2023-07-18 12:14:48,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:48,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:48,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. after waiting 0 ms 2023-07-18 12:14:48,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:48,819 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=d59b998b9371efcbe3070efc0f8ffe90, regionState=CLOSED 2023-07-18 12:14:48,819 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488818"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682488818"}]},"ts":"1689682488818"} 2023-07-18 12:14:48,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:48,820 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=200a6017b0aa493242a0b27c624a2a96, regionState=CLOSED 2023-07-18 12:14:48,820 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682488820"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682488820"}]},"ts":"1689682488820"} 2023-07-18 12:14:48,825 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=45 2023-07-18 12:14:48,825 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=45, state=SUCCESS; CloseRegionProcedure d59b998b9371efcbe3070efc0f8ffe90, server=jenkins-hbase4.apache.org,41985,1689682479721 in 221 msec 2023-07-18 12:14:48,851 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=44 2023-07-18 12:14:48,851 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=44, state=SUCCESS; CloseRegionProcedure 200a6017b0aa493242a0b27c624a2a96, 
server=jenkins-hbase4.apache.org,35237,1689682479509 in 219 msec 2023-07-18 12:14:48,852 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d59b998b9371efcbe3070efc0f8ffe90, UNASSIGN in 234 msec 2023-07-18 12:14:48,854 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=200a6017b0aa493242a0b27c624a2a96, UNASSIGN in 260 msec 2023-07-18 12:14:48,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:14:48,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3. 2023-07-18 12:14:48,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa9497048283832ce04b2abd6d971dd3: 2023-07-18 12:14:48,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:48,860 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=aa9497048283832ce04b2abd6d971dd3, regionState=CLOSED 2023-07-18 12:14:48,860 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682488860"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682488860"}]},"ts":"1689682488860"} 2023-07-18 12:14:48,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-18 12:14:48,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure aa9497048283832ce04b2abd6d971dd3, server=jenkins-hbase4.apache.org,41985,1689682479721 in 257 msec 2023-07-18 12:14:48,874 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=41 2023-07-18 12:14:48,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aa9497048283832ce04b2abd6d971dd3, UNASSIGN in 280 msec 2023-07-18 12:14:48,877 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682488877"}]},"ts":"1689682488877"} 2023-07-18 12:14:48,883 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 12:14:48,886 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 12:14:48,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-18 12:14:48,898 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; 
DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 321 msec 2023-07-18 12:14:49,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-18 12:14:49,191 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-18 12:14:49,192 INFO [Listener at localhost/37687] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:49,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:49,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-18 12:14:49,208 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-18 12:14:49,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 12:14:49,223 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:49,223 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:49,223 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:49,223 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:49,223 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:49,229 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/recovered.edits] 2023-07-18 12:14:49,229 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/f, FileablePath, 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/recovered.edits] 2023-07-18 12:14:49,231 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/recovered.edits] 2023-07-18 12:14:49,233 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/recovered.edits] 2023-07-18 12:14:49,234 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/recovered.edits] 2023-07-18 12:14:49,247 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/recovered.edits/7.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96/recovered.edits/7.seqid 2023-07-18 12:14:49,249 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/recovered.edits/7.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90/recovered.edits/7.seqid 2023-07-18 12:14:49,249 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/200a6017b0aa493242a0b27c624a2a96 2023-07-18 12:14:49,250 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/recovered.edits/7.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3/recovered.edits/7.seqid 2023-07-18 12:14:49,251 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d59b998b9371efcbe3070efc0f8ffe90 2023-07-18 12:14:49,252 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aa9497048283832ce04b2abd6d971dd3 2023-07-18 12:14:49,253 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/recovered.edits/7.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885/recovered.edits/7.seqid 2023-07-18 12:14:49,254 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69c42c802eb19b3e18523b4f8abd3885 2023-07-18 12:14:49,254 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/recovered.edits/7.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424/recovered.edits/7.seqid 2023-07-18 12:14:49,255 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/da451990537b4adcd2f77ee99d13a424 2023-07-18 12:14:49,255 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 12:14:49,292 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 12:14:49,297 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 12:14:49,298 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-18 12:14:49,298 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682489298"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:49,298 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682489298"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:49,298 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682489298"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:49,298 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682489298"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:49,298 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682489298"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:49,302 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 12:14:49,303 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 69c42c802eb19b3e18523b4f8abd3885, NAME => 'Group_testTableMoveTruncateAndDrop,,1689682485181.69c42c802eb19b3e18523b4f8abd3885.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => da451990537b4adcd2f77ee99d13a424, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689682485181.da451990537b4adcd2f77ee99d13a424.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 200a6017b0aa493242a0b27c624a2a96, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682485181.200a6017b0aa493242a0b27c624a2a96.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => d59b998b9371efcbe3070efc0f8ffe90, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682485181.d59b998b9371efcbe3070efc0f8ffe90.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => aa9497048283832ce04b2abd6d971dd3, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689682485181.aa9497048283832ce04b2abd6d971dd3.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 12:14:49,303 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-18 12:14:49,303 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689682489303"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:49,309 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 12:14:49,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 12:14:49,322 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:49,322 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:49,322 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:49,322 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:49,322 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e 2023-07-18 12:14:49,323 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e empty. 2023-07-18 12:14:49,323 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9 empty. 2023-07-18 12:14:49,323 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed empty. 2023-07-18 12:14:49,323 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7 empty. 2023-07-18 12:14:49,324 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3 empty. 
2023-07-18 12:14:49,324 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e 2023-07-18 12:14:49,325 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:49,325 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:49,325 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:49,325 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:49,325 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 12:14:49,361 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:49,362 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 957d6d25fe63f11ee60426f814ac18a9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:49,363 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 414c3efdf1678e75084555124e94657e, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:49,363 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 9d851e4d0be73eb6e035b6b6d1f404c7, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:49,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:49,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 957d6d25fe63f11ee60426f814ac18a9, disabling compactions & flushes 2023-07-18 12:14:49,408 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:49,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:49,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. after waiting 0 ms 2023-07-18 12:14:49,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:49,408 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 
2023-07-18 12:14:49,408 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 957d6d25fe63f11ee60426f814ac18a9: 2023-07-18 12:14:49,409 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b4f7304630b3d76a09c4679770272ad3, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:49,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:49,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 414c3efdf1678e75084555124e94657e, disabling compactions & flushes 2023-07-18 12:14:49,412 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:49,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:49,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. after waiting 0 ms 2023-07-18 12:14:49,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:49,412 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 
2023-07-18 12:14:49,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 414c3efdf1678e75084555124e94657e: 2023-07-18 12:14:49,413 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => e40fcd59566f3a52877ff44805a039ed, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:49,433 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:49,433 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:49,433 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing e40fcd59566f3a52877ff44805a039ed, disabling compactions & flushes 2023-07-18 12:14:49,433 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b4f7304630b3d76a09c4679770272ad3, disabling compactions & flushes 2023-07-18 12:14:49,433 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 2023-07-18 12:14:49,433 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:49,433 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 2023-07-18 12:14:49,433 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:49,433 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. after waiting 0 ms 2023-07-18 12:14:49,434 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 
after waiting 0 ms 2023-07-18 12:14:49,434 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 2023-07-18 12:14:49,434 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:49,434 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 2023-07-18 12:14:49,434 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:49,434 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for e40fcd59566f3a52877ff44805a039ed: 2023-07-18 12:14:49,434 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b4f7304630b3d76a09c4679770272ad3: 2023-07-18 12:14:49,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 12:14:49,811 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:49,811 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 9d851e4d0be73eb6e035b6b6d1f404c7, disabling compactions & flushes 2023-07-18 12:14:49,811 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 2023-07-18 12:14:49,811 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 2023-07-18 12:14:49,812 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. after waiting 0 ms 2023-07-18 12:14:49,812 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 2023-07-18 12:14:49,812 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 
2023-07-18 12:14:49,812 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 9d851e4d0be73eb6e035b6b6d1f404c7: 2023-07-18 12:14:49,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 12:14:49,817 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682489817"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682489817"}]},"ts":"1689682489817"} 2023-07-18 12:14:49,818 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682489257.414c3efdf1678e75084555124e94657e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682489817"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682489817"}]},"ts":"1689682489817"} 2023-07-18 12:14:49,818 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682489817"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682489817"}]},"ts":"1689682489817"} 2023-07-18 12:14:49,818 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682489817"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682489817"}]},"ts":"1689682489817"} 2023-07-18 12:14:49,818 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682489817"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682489817"}]},"ts":"1689682489817"} 2023-07-18 12:14:49,821 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-18 12:14:49,823 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682489822"}]},"ts":"1689682489822"} 2023-07-18 12:14:49,824 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 12:14:49,829 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:49,830 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:49,830 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:49,830 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:49,833 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=957d6d25fe63f11ee60426f814ac18a9, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d851e4d0be73eb6e035b6b6d1f404c7, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=414c3efdf1678e75084555124e94657e, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4f7304630b3d76a09c4679770272ad3, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e40fcd59566f3a52877ff44805a039ed, ASSIGN}] 2023-07-18 12:14:49,835 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d851e4d0be73eb6e035b6b6d1f404c7, ASSIGN 2023-07-18 12:14:49,835 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=957d6d25fe63f11ee60426f814ac18a9, ASSIGN 2023-07-18 12:14:49,836 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=414c3efdf1678e75084555124e94657e, ASSIGN 2023-07-18 12:14:49,836 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4f7304630b3d76a09c4679770272ad3, ASSIGN 2023-07-18 12:14:49,836 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d851e4d0be73eb6e035b6b6d1f404c7, ASSIGN; state=OFFLINE, 
location=jenkins-hbase4.apache.org,41985,1689682479721; forceNewPlan=false, retain=false 2023-07-18 12:14:49,837 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e40fcd59566f3a52877ff44805a039ed, ASSIGN 2023-07-18 12:14:49,837 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=957d6d25fe63f11ee60426f814ac18a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41985,1689682479721; forceNewPlan=false, retain=false 2023-07-18 12:14:49,837 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=414c3efdf1678e75084555124e94657e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35237,1689682479509; forceNewPlan=false, retain=false 2023-07-18 12:14:49,838 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4f7304630b3d76a09c4679770272ad3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35237,1689682479509; forceNewPlan=false, retain=false 2023-07-18 12:14:49,839 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e40fcd59566f3a52877ff44805a039ed, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41985,1689682479721; forceNewPlan=false, retain=false 2023-07-18 12:14:49,987 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 12:14:49,990 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=e40fcd59566f3a52877ff44805a039ed, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:49,990 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=9d851e4d0be73eb6e035b6b6d1f404c7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:49,990 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=414c3efdf1678e75084555124e94657e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:49,990 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=957d6d25fe63f11ee60426f814ac18a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:49,991 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682489257.414c3efdf1678e75084555124e94657e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682489990"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682489990"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682489990"}]},"ts":"1689682489990"} 2023-07-18 12:14:49,990 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=b4f7304630b3d76a09c4679770272ad3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:49,991 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682489990"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682489990"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682489990"}]},"ts":"1689682489990"} 2023-07-18 12:14:49,991 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682489990"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682489990"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682489990"}]},"ts":"1689682489990"} 2023-07-18 12:14:49,991 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682489990"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682489990"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682489990"}]},"ts":"1689682489990"} 2023-07-18 12:14:49,990 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682489990"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682489990"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682489990"}]},"ts":"1689682489990"} 2023-07-18 12:14:49,993 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=55, state=RUNNABLE; OpenRegionProcedure 
414c3efdf1678e75084555124e94657e, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:49,994 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=53, state=RUNNABLE; OpenRegionProcedure 957d6d25fe63f11ee60426f814ac18a9, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:49,995 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=56, state=RUNNABLE; OpenRegionProcedure b4f7304630b3d76a09c4679770272ad3, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:49,997 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=54, state=RUNNABLE; OpenRegionProcedure 9d851e4d0be73eb6e035b6b6d1f404c7, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:49,998 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=57, state=RUNNABLE; OpenRegionProcedure e40fcd59566f3a52877ff44805a039ed, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:50,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:50,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b4f7304630b3d76a09c4679770272ad3, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 12:14:50,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 
2023-07-18 12:14:50,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e40fcd59566f3a52877ff44805a039ed, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 12:14:50,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:50,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:50,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,157 INFO [StoreOpener-b4f7304630b3d76a09c4679770272ad3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,157 INFO [StoreOpener-e40fcd59566f3a52877ff44805a039ed-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,159 DEBUG [StoreOpener-e40fcd59566f3a52877ff44805a039ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed/f 2023-07-18 12:14:50,159 DEBUG [StoreOpener-e40fcd59566f3a52877ff44805a039ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed/f 2023-07-18 12:14:50,160 INFO [StoreOpener-e40fcd59566f3a52877ff44805a039ed-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e40fcd59566f3a52877ff44805a039ed columnFamilyName f 2023-07-18 12:14:50,160 DEBUG [StoreOpener-b4f7304630b3d76a09c4679770272ad3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3/f 2023-07-18 12:14:50,160 DEBUG [StoreOpener-b4f7304630b3d76a09c4679770272ad3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3/f 2023-07-18 12:14:50,161 INFO [StoreOpener-e40fcd59566f3a52877ff44805a039ed-1] regionserver.HStore(310): Store=e40fcd59566f3a52877ff44805a039ed/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:50,163 INFO [StoreOpener-b4f7304630b3d76a09c4679770272ad3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b4f7304630b3d76a09c4679770272ad3 columnFamilyName f 2023-07-18 12:14:50,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,167 INFO [StoreOpener-b4f7304630b3d76a09c4679770272ad3-1] regionserver.HStore(310): Store=b4f7304630b3d76a09c4679770272ad3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:50,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:50,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:50,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e40fcd59566f3a52877ff44805a039ed; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11003364800, jitterRate=0.024768203496932983}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:50,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e40fcd59566f3a52877ff44805a039ed: 2023-07-18 12:14:50,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed., pid=62, masterSystemTime=1689682490149 2023-07-18 12:14:50,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b4f7304630b3d76a09c4679770272ad3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11470128960, jitterRate=0.0682390034198761}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:50,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b4f7304630b3d76a09c4679770272ad3: 2023-07-18 12:14:50,194 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3., pid=60, masterSystemTime=1689682490146 2023-07-18 12:14:50,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 
2023-07-18 12:14:50,195 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=e40fcd59566f3a52877ff44805a039ed, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:50,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 2023-07-18 12:14:50,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:50,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 957d6d25fe63f11ee60426f814ac18a9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 12:14:50,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:50,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,196 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682490195"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682490195"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682490195"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682490195"}]},"ts":"1689682490195"} 2023-07-18 12:14:50,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:50,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:50,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 
2023-07-18 12:14:50,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 414c3efdf1678e75084555124e94657e, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 12:14:50,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:50,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,199 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=b4f7304630b3d76a09c4679770272ad3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:50,199 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490198"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682490198"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682490198"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682490198"}]},"ts":"1689682490198"} 2023-07-18 12:14:50,201 INFO [StoreOpener-957d6d25fe63f11ee60426f814ac18a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,203 INFO [StoreOpener-414c3efdf1678e75084555124e94657e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,206 DEBUG [StoreOpener-957d6d25fe63f11ee60426f814ac18a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9/f 2023-07-18 12:14:50,206 DEBUG [StoreOpener-957d6d25fe63f11ee60426f814ac18a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9/f 2023-07-18 12:14:50,206 INFO [StoreOpener-957d6d25fe63f11ee60426f814ac18a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 957d6d25fe63f11ee60426f814ac18a9 columnFamilyName f 2023-07-18 12:14:50,207 DEBUG [StoreOpener-414c3efdf1678e75084555124e94657e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e/f 2023-07-18 12:14:50,207 DEBUG [StoreOpener-414c3efdf1678e75084555124e94657e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e/f 2023-07-18 12:14:50,207 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=57 2023-07-18 12:14:50,207 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=57, state=SUCCESS; OpenRegionProcedure e40fcd59566f3a52877ff44805a039ed, server=jenkins-hbase4.apache.org,41985,1689682479721 in 201 msec 2023-07-18 12:14:50,207 INFO [StoreOpener-414c3efdf1678e75084555124e94657e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 414c3efdf1678e75084555124e94657e columnFamilyName f 2023-07-18 12:14:50,208 INFO [StoreOpener-957d6d25fe63f11ee60426f814ac18a9-1] regionserver.HStore(310): Store=957d6d25fe63f11ee60426f814ac18a9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:50,209 INFO [StoreOpener-414c3efdf1678e75084555124e94657e-1] regionserver.HStore(310): Store=414c3efdf1678e75084555124e94657e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:50,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,210 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=56 2023-07-18 12:14:50,210 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=56, state=SUCCESS; OpenRegionProcedure b4f7304630b3d76a09c4679770272ad3, server=jenkins-hbase4.apache.org,35237,1689682479509 in 206 msec 2023-07-18 12:14:50,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,211 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e40fcd59566f3a52877ff44805a039ed, ASSIGN in 374 msec 2023-07-18 12:14:50,213 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4f7304630b3d76a09c4679770272ad3, ASSIGN in 377 msec 2023-07-18 12:14:50,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:50,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 957d6d25fe63f11ee60426f814ac18a9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10459328160, jitterRate=-0.02589915692806244}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:50,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 957d6d25fe63f11ee60426f814ac18a9: 2023-07-18 12:14:50,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,222 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9., pid=59, masterSystemTime=1689682490149 2023-07-18 12:14:50,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:50,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:50,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:50,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 2023-07-18 12:14:50,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d851e4d0be73eb6e035b6b6d1f404c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 12:14:50,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,226 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=957d6d25fe63f11ee60426f814ac18a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:50,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:50,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 414c3efdf1678e75084555124e94657e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9944124320, jitterRate=-0.07388125360012054}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:50,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 414c3efdf1678e75084555124e94657e: 2023-07-18 12:14:50,226 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682490225"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682490225"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682490225"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682490225"}]},"ts":"1689682490225"} 2023-07-18 12:14:50,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,227 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e., pid=58, masterSystemTime=1689682490146 2023-07-18 12:14:50,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:50,229 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:50,230 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=414c3efdf1678e75084555124e94657e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:50,230 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682489257.414c3efdf1678e75084555124e94657e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490230"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682490230"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682490230"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682490230"}]},"ts":"1689682490230"} 2023-07-18 12:14:50,231 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=53 2023-07-18 12:14:50,231 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=53, state=SUCCESS; OpenRegionProcedure 957d6d25fe63f11ee60426f814ac18a9, server=jenkins-hbase4.apache.org,41985,1689682479721 in 234 msec 2023-07-18 12:14:50,234 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=957d6d25fe63f11ee60426f814ac18a9, ASSIGN in 401 msec 2023-07-18 12:14:50,236 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-18 12:14:50,236 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; OpenRegionProcedure 414c3efdf1678e75084555124e94657e, server=jenkins-hbase4.apache.org,35237,1689682479509 in 239 msec 2023-07-18 12:14:50,243 INFO [StoreOpener-9d851e4d0be73eb6e035b6b6d1f404c7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,245 DEBUG [StoreOpener-9d851e4d0be73eb6e035b6b6d1f404c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7/f 2023-07-18 12:14:50,245 DEBUG [StoreOpener-9d851e4d0be73eb6e035b6b6d1f404c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7/f 2023-07-18 12:14:50,246 INFO [StoreOpener-9d851e4d0be73eb6e035b6b6d1f404c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy 
for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d851e4d0be73eb6e035b6b6d1f404c7 columnFamilyName f 2023-07-18 12:14:50,247 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=414c3efdf1678e75084555124e94657e, ASSIGN in 403 msec 2023-07-18 12:14:50,248 INFO [StoreOpener-9d851e4d0be73eb6e035b6b6d1f404c7-1] regionserver.HStore(310): Store=9d851e4d0be73eb6e035b6b6d1f404c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:50,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:50,257 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d851e4d0be73eb6e035b6b6d1f404c7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11785511360, jitterRate=0.09761127829551697}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:50,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d851e4d0be73eb6e035b6b6d1f404c7: 2023-07-18 12:14:50,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7., pid=61, masterSystemTime=1689682490149 2023-07-18 12:14:50,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 2023-07-18 12:14:50,260 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 
2023-07-18 12:14:50,261 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=9d851e4d0be73eb6e035b6b6d1f404c7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:50,261 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490261"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682490261"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682490261"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682490261"}]},"ts":"1689682490261"} 2023-07-18 12:14:50,267 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=54 2023-07-18 12:14:50,267 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=54, state=SUCCESS; OpenRegionProcedure 9d851e4d0be73eb6e035b6b6d1f404c7, server=jenkins-hbase4.apache.org,41985,1689682479721 in 267 msec 2023-07-18 12:14:50,271 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=52 2023-07-18 12:14:50,271 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d851e4d0be73eb6e035b6b6d1f404c7, ASSIGN in 434 msec 2023-07-18 12:14:50,271 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682490271"}]},"ts":"1689682490271"} 2023-07-18 12:14:50,282 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 12:14:50,286 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-18 12:14:50,289 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.0870 sec 2023-07-18 12:14:50,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 12:14:50,316 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-18 12:14:50,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:50,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:50,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:50,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:50,320 INFO [Listener at localhost/37687] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-18 12:14:50,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682490326"}]},"ts":"1689682490326"} 2023-07-18 12:14:50,328 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 12:14:50,331 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 12:14:50,333 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=957d6d25fe63f11ee60426f814ac18a9, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d851e4d0be73eb6e035b6b6d1f404c7, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=414c3efdf1678e75084555124e94657e, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4f7304630b3d76a09c4679770272ad3, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e40fcd59566f3a52877ff44805a039ed, UNASSIGN}] 2023-07-18 12:14:50,336 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e40fcd59566f3a52877ff44805a039ed, UNASSIGN 2023-07-18 12:14:50,336 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4f7304630b3d76a09c4679770272ad3, UNASSIGN 2023-07-18 12:14:50,336 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=414c3efdf1678e75084555124e94657e, UNASSIGN 2023-07-18 12:14:50,337 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d851e4d0be73eb6e035b6b6d1f404c7, UNASSIGN 
2023-07-18 12:14:50,337 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=957d6d25fe63f11ee60426f814ac18a9, UNASSIGN 2023-07-18 12:14:50,338 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=e40fcd59566f3a52877ff44805a039ed, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:50,338 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=b4f7304630b3d76a09c4679770272ad3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:50,338 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682490338"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682490338"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682490338"}]},"ts":"1689682490338"} 2023-07-18 12:14:50,338 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490338"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682490338"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682490338"}]},"ts":"1689682490338"} 2023-07-18 12:14:50,339 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=414c3efdf1678e75084555124e94657e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:50,339 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682489257.414c3efdf1678e75084555124e94657e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490339"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682490339"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682490339"}]},"ts":"1689682490339"} 2023-07-18 12:14:50,339 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=9d851e4d0be73eb6e035b6b6d1f404c7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:50,340 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490339"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682490339"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682490339"}]},"ts":"1689682490339"} 2023-07-18 12:14:50,340 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=957d6d25fe63f11ee60426f814ac18a9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:50,340 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682490340"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682490340"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682490340"}]},"ts":"1689682490340"} 2023-07-18 12:14:50,340 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=68, state=RUNNABLE; CloseRegionProcedure e40fcd59566f3a52877ff44805a039ed, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:50,343 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=67, state=RUNNABLE; CloseRegionProcedure b4f7304630b3d76a09c4679770272ad3, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:50,344 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=66, state=RUNNABLE; CloseRegionProcedure 414c3efdf1678e75084555124e94657e, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:50,345 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=65, state=RUNNABLE; CloseRegionProcedure 9d851e4d0be73eb6e035b6b6d1f404c7, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:50,346 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=64, state=RUNNABLE; CloseRegionProcedure 957d6d25fe63f11ee60426f814ac18a9, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:50,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-18 12:14:50,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 957d6d25fe63f11ee60426f814ac18a9, disabling compactions & flushes 2023-07-18 12:14:50,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:50,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:50,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. after waiting 0 ms 2023-07-18 12:14:50,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 
2023-07-18 12:14:50,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 414c3efdf1678e75084555124e94657e, disabling compactions & flushes 2023-07-18 12:14:50,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:50,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:50,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. after waiting 0 ms 2023-07-18 12:14:50,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:50,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:50,506 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9. 2023-07-18 12:14:50,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 957d6d25fe63f11ee60426f814ac18a9: 2023-07-18 12:14:50,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:50,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e. 2023-07-18 12:14:50,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 414c3efdf1678e75084555124e94657e: 2023-07-18 12:14:50,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e40fcd59566f3a52877ff44805a039ed, disabling compactions & flushes 2023-07-18 12:14:50,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 
2023-07-18 12:14:50,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 2023-07-18 12:14:50,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. after waiting 0 ms 2023-07-18 12:14:50,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 2023-07-18 12:14:50,510 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=957d6d25fe63f11ee60426f814ac18a9, regionState=CLOSED 2023-07-18 12:14:50,511 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682490510"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682490510"}]},"ts":"1689682490510"} 2023-07-18 12:14:50,512 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,512 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,512 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=414c3efdf1678e75084555124e94657e, regionState=CLOSED 2023-07-18 12:14:50,513 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682489257.414c3efdf1678e75084555124e94657e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490512"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682490512"}]},"ts":"1689682490512"} 2023-07-18 12:14:50,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b4f7304630b3d76a09c4679770272ad3, disabling compactions & flushes 2023-07-18 12:14:50,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:50,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:50,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. after waiting 0 ms 2023-07-18 12:14:50,515 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 
2023-07-18 12:14:50,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:50,518 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=64 2023-07-18 12:14:50,519 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=64, state=SUCCESS; CloseRegionProcedure 957d6d25fe63f11ee60426f814ac18a9, server=jenkins-hbase4.apache.org,41985,1689682479721 in 169 msec 2023-07-18 12:14:50,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed. 2023-07-18 12:14:50,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e40fcd59566f3a52877ff44805a039ed: 2023-07-18 12:14:50,520 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=66 2023-07-18 12:14:50,521 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; CloseRegionProcedure 414c3efdf1678e75084555124e94657e, server=jenkins-hbase4.apache.org,35237,1689682479509 in 172 msec 2023-07-18 12:14:50,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:50,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3. 2023-07-18 12:14:50,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b4f7304630b3d76a09c4679770272ad3: 2023-07-18 12:14:50,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d851e4d0be73eb6e035b6b6d1f404c7, disabling compactions & flushes 2023-07-18 12:14:50,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 2023-07-18 12:14:50,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 2023-07-18 12:14:50,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 
after waiting 0 ms 2023-07-18 12:14:50,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 2023-07-18 12:14:50,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=957d6d25fe63f11ee60426f814ac18a9, UNASSIGN in 186 msec 2023-07-18 12:14:50,523 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=e40fcd59566f3a52877ff44805a039ed, regionState=CLOSED 2023-07-18 12:14:50,524 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689682490523"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682490523"}]},"ts":"1689682490523"} 2023-07-18 12:14:50,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=414c3efdf1678e75084555124e94657e, UNASSIGN in 187 msec 2023-07-18 12:14:50,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,526 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=b4f7304630b3d76a09c4679770272ad3, regionState=CLOSED 2023-07-18 12:14:50,526 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490526"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682490526"}]},"ts":"1689682490526"} 2023-07-18 12:14:50,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:50,533 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7. 
2023-07-18 12:14:50,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d851e4d0be73eb6e035b6b6d1f404c7: 2023-07-18 12:14:50,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=68 2023-07-18 12:14:50,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=68, state=SUCCESS; CloseRegionProcedure e40fcd59566f3a52877ff44805a039ed, server=jenkins-hbase4.apache.org,41985,1689682479721 in 187 msec 2023-07-18 12:14:50,534 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=67 2023-07-18 12:14:50,535 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=67, state=SUCCESS; CloseRegionProcedure b4f7304630b3d76a09c4679770272ad3, server=jenkins-hbase4.apache.org,35237,1689682479509 in 185 msec 2023-07-18 12:14:50,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,536 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e40fcd59566f3a52877ff44805a039ed, UNASSIGN in 201 msec 2023-07-18 12:14:50,536 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=9d851e4d0be73eb6e035b6b6d1f404c7, regionState=CLOSED 2023-07-18 12:14:50,536 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b4f7304630b3d76a09c4679770272ad3, UNASSIGN in 201 msec 2023-07-18 12:14:50,536 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689682490536"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682490536"}]},"ts":"1689682490536"} 2023-07-18 12:14:50,541 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=65 2023-07-18 12:14:50,541 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=65, state=SUCCESS; CloseRegionProcedure 9d851e4d0be73eb6e035b6b6d1f404c7, server=jenkins-hbase4.apache.org,41985,1689682479721 in 193 msec 2023-07-18 12:14:50,548 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=63 2023-07-18 12:14:50,548 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d851e4d0be73eb6e035b6b6d1f404c7, UNASSIGN in 208 msec 2023-07-18 12:14:50,549 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682490549"}]},"ts":"1689682490549"} 2023-07-18 12:14:50,550 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 12:14:50,559 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 12:14:50,561 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure 
table=Group_testTableMoveTruncateAndDrop in 240 msec 2023-07-18 12:14:50,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-18 12:14:50,629 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-18 12:14:50,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,645 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1982584964' 2023-07-18 12:14:50,647 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:50,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:50,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:50,666 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-18 12:14:50,666 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,666 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,666 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,666 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,670 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9/recovered.edits] 2023-07-18 12:14:50,671 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3/recovered.edits] 2023-07-18 12:14:50,671 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed/recovered.edits] 2023-07-18 12:14:50,671 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7/recovered.edits] 2023-07-18 12:14:50,671 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e/recovered.edits] 2023-07-18 12:14:50,683 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e/recovered.edits/4.seqid 2023-07-18 12:14:50,684 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed/recovered.edits/4.seqid 2023-07-18 12:14:50,684 DEBUG 
[HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/414c3efdf1678e75084555124e94657e 2023-07-18 12:14:50,685 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7/recovered.edits/4.seqid 2023-07-18 12:14:50,685 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9/recovered.edits/4.seqid 2023-07-18 12:14:50,686 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e40fcd59566f3a52877ff44805a039ed 2023-07-18 12:14:50,686 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d851e4d0be73eb6e035b6b6d1f404c7 2023-07-18 12:14:50,686 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/957d6d25fe63f11ee60426f814ac18a9 2023-07-18 12:14:50,693 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3/recovered.edits/4.seqid 2023-07-18 12:14:50,694 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b4f7304630b3d76a09c4679770272ad3 2023-07-18 12:14:50,694 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 12:14:50,697 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,704 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 12:14:50,706 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 
2023-07-18 12:14:50,708 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,708 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-18 12:14:50,708 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682490708"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:50,708 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682490708"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:50,708 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689682489257.414c3efdf1678e75084555124e94657e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682490708"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:50,708 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682490708"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:50,708 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682490708"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:50,711 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 12:14:50,711 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 957d6d25fe63f11ee60426f814ac18a9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 9d851e4d0be73eb6e035b6b6d1f404c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689682489257.9d851e4d0be73eb6e035b6b6d1f404c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 414c3efdf1678e75084555124e94657e, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689682489257.414c3efdf1678e75084555124e94657e.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => b4f7304630b3d76a09c4679770272ad3, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689682489257.b4f7304630b3d76a09c4679770272ad3.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => e40fcd59566f3a52877ff44805a039ed, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689682489257.e40fcd59566f3a52877ff44805a039ed.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 12:14:50,711 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
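The Delete mutations rendered as JSON above each target one region row in hbase:meta and cover the whole info family. A rough client-side equivalent of one such mutation is sketched below, with a hypothetical connection and the region row key taken from the first Delete in the log; MetaTableAccessor issues these internally, so this is illustration only.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DeleteMetaRowSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Region row key as it appears in the log's first Delete.
          byte[] row = Bytes.toBytes(
              "Group_testTableMoveTruncateAndDrop,,1689682489257.957d6d25fe63f11ee60426f814ac18a9.");
          Delete d = new Delete(row);
          d.addFamily(Bytes.toBytes("info"));  // drop every info:* cell for the region
          meta.delete(d);
        }
      }
    }
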
2023-07-18 12:14:50,711 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689682490711"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:50,713 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 12:14:50,715 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 12:14:50,717 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 80 msec 2023-07-18 12:14:50,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-18 12:14:50,769 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-18 12:14:50,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:50,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:50,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:50,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:50,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:50,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
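The client-side view of this sequence is a plain Admin call: HBaseAdmin submits the delete (here DeleteTableProcedure, procId 74) and polls the master until the procedure reports done, which is the "Checking to see if procedure is done pid=74" / "Operation: DELETE ... completed" pair above. A minimal sketch assuming a standard client connection (the test had already disabled the table before deleting it):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(tn)) {
            admin.disableTable(tn);  // regions must be offline before the delete
          }
          admin.deleteTable(tn);     // blocks until the master's DeleteTableProcedure completes
        }
      }
    }
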
2023-07-18 12:14:50,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:50,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:35237] to rsgroup default 2023-07-18 12:14:50,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:50,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:50,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:50,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1982584964, current retry=0 2023-07-18 12:14:50,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721] are moved back to Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:50,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1982584964 => default 2023-07-18 12:14:50,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:50,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1982584964 2023-07-18 12:14:50,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:50,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 12:14:50,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:50,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:50,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
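The MoveServers / RemoveRSGroup requests above are the TestRSGroupsBase teardown moving the test group's servers back into "default" and then dropping the group. The sketch below shows those two calls, assuming the branch-2.4 RSGroupAdminClient API that the stack trace further down also references (a constructor taking a Connection, moveServers(Set<Address>, String) and removeRSGroup(String)); the server address and group name are copied from the log.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          // Assumed constructor/signatures per the branch-2.4 hbase-rsgroup client.
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Put one of the test group's servers back into "default" ...
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41985)),
              "default");
          // ... then remove the now-empty test group.
          groups.removeRSGroup("Group_testTableMoveTruncateAndDrop_1982584964");
        }
      }
    }

The ConstraintException that follows a few entries later appears to be the same teardown attempting to move the master's own address (port 36151) into the "master" group; since the master is not a live region server the call is rejected, and TestRSGroupsBase merely logs it as a warning.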
2023-07-18 12:14:50,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:50,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:50,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:50,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:50,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:50,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:50,822 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:50,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:50,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:50,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:50,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:50,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:50,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:50,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:50,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:50,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 151 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683690837, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:50,838 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:50,840 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:50,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:50,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:50,841 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:50,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:50,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:50,874 INFO [Listener at localhost/37687] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=500 (was 418) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1311322825_17 at /127.0.0.1:33418 [Receiving block BP-1681315234-172.31.14.131-1689682473336:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-631 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1311322825_17 at /127.0.0.1:60394 [Receiving block BP-1681315234-172.31.14.131-1689682473336:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae-prefix:jenkins-hbase4.apache.org,44567,1689682483625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44567Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1276618464_17 at /127.0.0.1:60140 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44567 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1311322825_17 at /127.0.0.1:60420 [Receiving block BP-1681315234-172.31.14.131-1689682473336:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44567 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1311322825_17 at /127.0.0.1:40674 [Receiving block BP-1681315234-172.31.14.131-1689682473336:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1311322825_17 at /127.0.0.1:40638 [Receiving 
block BP-1681315234-172.31.14.131-1689682473336:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1311322825_17 at /127.0.0.1:33452 [Receiving block BP-1681315234-172.31.14.131-1689682473336:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging 
thread: hconnection-0xa4b4f0d-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae-prefix:jenkins-hbase4.apache.org,44567,1689682483625.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44567-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-632-acceptor-0@7095516b-ServerConnector@614a5820{HTTP/1.1, (http/1.1)}{0.0.0.0:36375} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-562778054_17 at /127.0.0.1:40732 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50805@0x65354fdb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44567 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp733013092-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:46497 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50805@0x65354fdb-SendThread(127.0.0.1:50805) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7ff3ba12-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1178471211_17 at /127.0.0.1:33504 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:46497 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50805@0x65354fdb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x120ad869-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: hconnection-0xa4b4f0d-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=808 (was 677) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=474 (was 429) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=3090 (was 3781) 2023-07-18 12:14:50,895 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=502, OpenFileDescriptor=808, MaxFileDescriptor=60000, SystemLoadAverage=474, ProcessCount=176, AvailableMemoryMB=3090 2023-07-18 12:14:50,895 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-18 12:14:50,895 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-18 12:14:50,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:50,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:50,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:50,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:14:50,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:50,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:50,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:50,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:50,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:50,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:50,915 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:50,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:50,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:50,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:50,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:50,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:50,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:50,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:50,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:50,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 179 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683690930, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:50,931 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:50,933 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:50,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:50,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:50,934 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:50,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:50,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:50,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-18 12:14:50,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:50,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:51504 deadline: 1689683690940, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 12:14:50,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-18 12:14:50,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:50,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 187 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:51504 deadline: 1689683690941, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 12:14:50,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-18 12:14:50,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:50,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 189 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:51504 deadline: 1689683690943, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 12:14:50,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-18 12:14:50,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-18 12:14:50,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:50,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:50,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:50,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:50,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:50,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:50,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:50,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:50,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:14:50,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:50,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:50,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:50,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-18 12:14:50,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:50,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 12:14:50,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:50,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:50,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:14:50,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:50,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:50,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:50,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:50,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:50,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:50,988 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:50,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:50,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:50,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:50,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:50,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:50,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:51,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:51,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:51,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:51,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 223 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683691002, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:51,003 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:51,005 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:51,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:51,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:51,007 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:51,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:51,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:51,030 INFO [Listener at localhost/37687] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=505 (was 502) Potentially hanging thread: hconnection-0x120ad869-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=808 (was 808), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=474 (was 474), ProcessCount=176 (was 176), AvailableMemoryMB=3088 (was 3090) 2023-07-18 12:14:51,030 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-18 12:14:51,066 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=505, OpenFileDescriptor=808, MaxFileDescriptor=60000, SystemLoadAverage=474, ProcessCount=176, AvailableMemoryMB=3086 2023-07-18 12:14:51,066 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-18 12:14:51,066 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-18 12:14:51,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:51,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:51,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:51,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
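[Note: illustrative sketch, not from the log. The teardown/setup entries above and below reset rsgroup state between tests: empty moveTables/moveServers calls against 'default', removing and re-adding the 'master' group, then attempting to move jenkins-hbase4.apache.org:36151 into it. That last call fails with ConstraintException because 36151 is the master's RPC port, not a live region server, which is why TestRSGroupsBase logs it as "Got this on setup, FYI". A rough, hedged approximation of that sequence follows; the call order is simplified, and `conn`, `masterAddr`, and the wrapper class are assumptions.]

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class TeardownSketch {
  // Approximates the per-test reset visible in the log entries around this point.
  static void resetGroups(Connection conn, Address masterAddr) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    admin.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);  // "move tables [] to rsgroup default"
    admin.moveServers(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP); // "move servers [] to rsgroup default"
    admin.removeRSGroup("master");
    admin.addRSGroup("master");
    try {
      // Rejected: the master's address does not belong to a running region server.
      admin.moveServers(Collections.singleton(masterAddr), "master");
    } catch (ConstraintException ignored) {
      // Logged above as "Got this on setup, FYI ... is either offline or it does not exist."
    }
  }
}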
2023-07-18 12:14:51,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:51,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:51,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:51,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:51,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:51,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:51,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:51,089 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:51,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:51,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:51,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:51,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:51,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:51,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:51,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:51,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:51,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:51,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 251 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683691107, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:51,108 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:51,110 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:51,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:51,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:51,112 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:51,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:51,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:51,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:51,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:51,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:51,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:51,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-18 12:14:51,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:51,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 12:14:51,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:51,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:51,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:51,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:51,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:51,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:35237] to rsgroup bar 2023-07-18 12:14:51,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:51,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 12:14:51,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:51,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:51,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-18 12:14:51,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 12:14:51,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 12:14:51,146 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 12:14:51,147 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44567,1689682483625, state=CLOSING 2023-07-18 12:14:51,149 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/hbase/meta-region-server 2023-07-18 12:14:51,149 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:14:51,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:14:51,303 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 12:14:51,304 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 12:14:51,304 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 12:14:51,304 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 12:14:51,304 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 12:14:51,304 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 12:14:51,304 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.60 KB heapSize=57.66 KB 2023-07-18 12:14:51,342 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.71 KB at sequenceid=96 (bloomFilter=false), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/info/9e66c121ae444a36aaa2875b082770a8 2023-07-18 12:14:51,352 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e66c121ae444a36aaa2875b082770a8 2023-07-18 12:14:51,376 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=96 (bloomFilter=false), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/rep_barrier/1d9f3e3828fd4e509ad99ac349652399 2023-07-18 12:14:51,383 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1d9f3e3828fd4e509ad99ac349652399 2023-07-18 12:14:51,410 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=96 (bloomFilter=false), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/table/59fae7f2af36403ebb27d1415b97a977 2023-07-18 12:14:51,417 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 59fae7f2af36403ebb27d1415b97a977 2023-07-18 12:14:51,419 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/info/9e66c121ae444a36aaa2875b082770a8 as 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/9e66c121ae444a36aaa2875b082770a8 2023-07-18 12:14:51,427 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e66c121ae444a36aaa2875b082770a8 2023-07-18 12:14:51,427 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/9e66c121ae444a36aaa2875b082770a8, entries=20, sequenceid=96, filesize=7.1 K 2023-07-18 12:14:51,429 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/rep_barrier/1d9f3e3828fd4e509ad99ac349652399 as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier/1d9f3e3828fd4e509ad99ac349652399 2023-07-18 12:14:51,440 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1d9f3e3828fd4e509ad99ac349652399 2023-07-18 12:14:51,440 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier/1d9f3e3828fd4e509ad99ac349652399, entries=10, sequenceid=96, filesize=6.1 K 2023-07-18 12:14:51,441 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/table/59fae7f2af36403ebb27d1415b97a977 as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/59fae7f2af36403ebb27d1415b97a977 2023-07-18 12:14:51,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 59fae7f2af36403ebb27d1415b97a977 2023-07-18 12:14:51,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/59fae7f2af36403ebb27d1415b97a977, entries=11, sequenceid=96, filesize=6.0 K 2023-07-18 12:14:51,450 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.60 KB/38501, heapSize ~57.62 KB/59000, currentSize=0 B/0 for 1588230740 in 146ms, sequenceid=96, compaction requested=false 2023-07-18 12:14:51,470 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/recovered.edits/99.seqid, newMaxSeqId=99, maxSeqId=17 2023-07-18 12:14:51,470 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:14:51,471 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 12:14:51,472 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 
1588230740: 2023-07-18 12:14:51,472 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44601,1689682479947 record at close sequenceid=96 2023-07-18 12:14:51,474 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 12:14:51,479 WARN [PEWorker-5] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-18 12:14:51,482 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-18 12:14:51,482 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44567,1689682483625 in 330 msec 2023-07-18 12:14:51,482 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:14:51,633 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44601,1689682479947, state=OPENING 2023-07-18 12:14:51,634 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 12:14:51,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:51,639 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:14:51,796 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 12:14:51,796 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:14:51,798 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44601%2C1689682479947.meta, suffix=.meta, logDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44601,1689682479947, archiveDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs, maxLogs=32 2023-07-18 12:14:51,814 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK] 2023-07-18 12:14:51,815 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK] 2023-07-18 12:14:51,819 DEBUG [RS-EventLoopGroup-7-3] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK] 2023-07-18 12:14:51,823 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/WALs/jenkins-hbase4.apache.org,44601,1689682479947/jenkins-hbase4.apache.org%2C44601%2C1689682479947.meta.1689682491799.meta 2023-07-18 12:14:51,826 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35987,DS-bb0055bf-2583-488f-88cd-6e67586120a0,DISK], DatanodeInfoWithStorage[127.0.0.1:43123,DS-5c0e3810-a9f8-497e-b70c-cd48867c9bc5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-acee68b2-b2f3-463b-98fb-ebaa65429ad7,DISK]] 2023-07-18 12:14:51,826 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:51,826 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 12:14:51,827 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 12:14:51,827 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 12:14:51,827 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 12:14:51,827 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:51,827 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 12:14:51,827 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 12:14:51,828 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 12:14:51,830 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info 2023-07-18 12:14:51,830 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info 2023-07-18 12:14:51,830 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 12:14:51,844 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e66c121ae444a36aaa2875b082770a8 2023-07-18 12:14:51,844 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/9e66c121ae444a36aaa2875b082770a8 2023-07-18 12:14:51,851 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/dab18fbcc5e94104a42c584316cb4eb2 2023-07-18 12:14:51,851 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:51,851 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 12:14:51,853 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:14:51,853 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:14:51,853 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 12:14:51,861 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1d9f3e3828fd4e509ad99ac349652399 2023-07-18 12:14:51,861 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier/1d9f3e3828fd4e509ad99ac349652399 2023-07-18 12:14:51,862 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:51,862 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 12:14:51,863 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table 2023-07-18 12:14:51,863 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table 2023-07-18 12:14:51,864 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 12:14:51,873 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/311afbd109b8425fabe21920058a11b6 2023-07-18 12:14:51,880 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 59fae7f2af36403ebb27d1415b97a977 2023-07-18 12:14:51,882 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/59fae7f2af36403ebb27d1415b97a977 2023-07-18 12:14:51,883 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:51,884 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740 2023-07-18 12:14:51,885 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740 2023-07-18 12:14:51,888 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 12:14:51,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 12:14:51,891 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=100; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10747453280, jitterRate=9.345859289169312E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 12:14:51,892 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 12:14:51,893 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=77, masterSystemTime=1689682491791 2023-07-18 12:14:51,900 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 12:14:51,900 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 12:14:51,900 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44601,1689682479947, state=OPEN 2023-07-18 12:14:51,902 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 12:14:51,905 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:14:51,908 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-18 12:14:51,908 
INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44601,1689682479947 in 267 msec 2023-07-18 12:14:51,911 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 766 msec 2023-07-18 12:14:52,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-18 12:14:52,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721, jenkins-hbase4.apache.org,44567,1689682483625] are moved back to default 2023-07-18 12:14:52,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-18 12:14:52,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:52,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:52,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:52,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 12:14:52,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:52,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:52,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-18 12:14:52,159 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:14:52,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 78 2023-07-18 12:14:52,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-18 12:14:52,163 DEBUG 
[PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:52,164 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 12:14:52,164 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:52,165 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:52,175 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:14:52,176 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44567] ipc.CallRunner(144): callId: 180 service: ClientService methodName: Get size: 142 connection: 172.31.14.131:36984 deadline: 1689682552176, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44601 startCode=1689682479947. As of locationSeqNum=96. 2023-07-18 12:14:52,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-18 12:14:52,279 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,280 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 empty. 2023-07-18 12:14:52,280 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,280 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 12:14:52,304 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:52,305 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 853b80efdfa7091744603fdaa4a82ca2, NAME => 'Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:52,319 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:52,320 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] 
regionserver.HRegion(1604): Closing 853b80efdfa7091744603fdaa4a82ca2, disabling compactions & flushes 2023-07-18 12:14:52,320 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:52,320 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:52,320 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. after waiting 0 ms 2023-07-18 12:14:52,320 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:52,320 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:52,320 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 853b80efdfa7091744603fdaa4a82ca2: 2023-07-18 12:14:52,322 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:14:52,323 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682492323"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682492323"}]},"ts":"1689682492323"} 2023-07-18 12:14:52,325 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
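Annotation: the entries from 12:14:52,154 onward show the master running CreateTableProcedure pid=78 for 'Group_testFailRemoveGroup' with a single column family 'f'. A hedged sketch of the client-side request that drives such a procedure is given below; how the test actually obtains its Connection, and the use of HBaseTestingUtility for the later assignment wait, are assumptions about test structure rather than facts read from this log.

    // Sketch of the client-side call behind CreateTableProcedure pid=78 above.
    // Connection acquisition is assumed, not shown in the log.
    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateGroupTableSketch {
      static void createGroupTestTable(Connection connection) throws IOException {
        TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
        try (Admin admin = connection.getAdmin()) {
          // One column family 'f', matching the descriptor logged by HMaster above.
          admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());
        }
        // The later "Waiting until all regions ... get assigned. Timeout = 60000ms"
        // entries correspond to a wait such as
        // HBaseTestingUtility#waitUntilAllRegionsAssigned(tableName).
      }
    }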
2023-07-18 12:14:52,326 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:14:52,326 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682492326"}]},"ts":"1689682492326"} 2023-07-18 12:14:52,327 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-18 12:14:52,330 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, ASSIGN}] 2023-07-18 12:14:52,333 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, ASSIGN 2023-07-18 12:14:52,335 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:14:52,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-18 12:14:52,486 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:52,486 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682492486"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682492486"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682492486"}]},"ts":"1689682492486"} 2023-07-18 12:14:52,489 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE; OpenRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:52,645 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 
2023-07-18 12:14:52,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 853b80efdfa7091744603fdaa4a82ca2, NAME => 'Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:52,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:52,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,647 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,648 DEBUG [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/f 2023-07-18 12:14:52,648 DEBUG [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/f 2023-07-18 12:14:52,649 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 853b80efdfa7091744603fdaa4a82ca2 columnFamilyName f 2023-07-18 12:14:52,649 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] regionserver.HStore(310): Store=853b80efdfa7091744603fdaa4a82ca2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:52,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,651 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:52,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:52,657 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 853b80efdfa7091744603fdaa4a82ca2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10013006880, jitterRate=-0.06746606528759003}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:52,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 853b80efdfa7091744603fdaa4a82ca2: 2023-07-18 12:14:52,657 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2., pid=80, masterSystemTime=1689682492640 2023-07-18 12:14:52,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:52,659 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 
2023-07-18 12:14:52,659 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:52,660 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682492659"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682492659"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682492659"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682492659"}]},"ts":"1689682492659"} 2023-07-18 12:14:52,662 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-18 12:14:52,663 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; OpenRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,44601,1689682479947 in 173 msec 2023-07-18 12:14:52,664 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-18 12:14:52,665 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, ASSIGN in 333 msec 2023-07-18 12:14:52,665 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:14:52,665 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682492665"}]},"ts":"1689682492665"} 2023-07-18 12:14:52,667 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-18 12:14:52,669 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:14:52,670 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 515 msec 2023-07-18 12:14:52,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-18 12:14:52,767 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 78 completed 2023-07-18 12:14:52,767 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-18 12:14:52,767 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:52,768 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44567] ipc.CallRunner(144): callId: 280 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:36992 deadline: 1689682552768, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44601 startCode=1689682479947. As of locationSeqNum=96. 2023-07-18 12:14:52,870 DEBUG [hconnection-0x497c82a-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:14:52,872 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51864, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:14:52,883 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-18 12:14:52,883 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:52,883 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-18 12:14:52,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-18 12:14:52,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:52,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 12:14:52,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:52,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:52,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-18 12:14:52,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region 853b80efdfa7091744603fdaa4a82ca2 to RSGroup bar 2023-07-18 12:14:52,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:52,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:52,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:52,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:52,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 12:14:52,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:52,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, REOPEN/MOVE 2023-07-18 12:14:52,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-18 12:14:52,894 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, REOPEN/MOVE 2023-07-18 12:14:52,895 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:52,895 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682492895"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682492895"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682492895"}]},"ts":"1689682492895"} 2023-07-18 12:14:52,897 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:53,050 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 853b80efdfa7091744603fdaa4a82ca2, disabling compactions & flushes 2023-07-18 12:14:53,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:53,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:53,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. after waiting 0 ms 2023-07-18 12:14:53,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:53,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:53,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 
2023-07-18 12:14:53,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 853b80efdfa7091744603fdaa4a82ca2: 2023-07-18 12:14:53,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 853b80efdfa7091744603fdaa4a82ca2 move to jenkins-hbase4.apache.org,35237,1689682479509 record at close sequenceid=2 2023-07-18 12:14:53,058 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,059 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=CLOSED 2023-07-18 12:14:53,060 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682493059"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682493059"}]},"ts":"1689682493059"} 2023-07-18 12:14:53,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-18 12:14:53,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,44601,1689682479947 in 164 msec 2023-07-18 12:14:53,064 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35237,1689682479509; forceNewPlan=false, retain=false 2023-07-18 12:14:53,214 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 12:14:53,215 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:53,215 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682493215"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682493215"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682493215"}]},"ts":"1689682493215"} 2023-07-18 12:14:53,216 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 12:14:53,217 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:53,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 
2023-07-18 12:14:53,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 853b80efdfa7091744603fdaa4a82ca2, NAME => 'Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:53,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:53,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,375 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,376 DEBUG [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/f 2023-07-18 12:14:53,376 DEBUG [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/f 2023-07-18 12:14:53,377 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 853b80efdfa7091744603fdaa4a82ca2 columnFamilyName f 2023-07-18 12:14:53,377 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] regionserver.HStore(310): Store=853b80efdfa7091744603fdaa4a82ca2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:53,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,380 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:53,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 853b80efdfa7091744603fdaa4a82ca2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9741940800, jitterRate=-0.09271106123924255}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:53,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 853b80efdfa7091744603fdaa4a82ca2: 2023-07-18 12:14:53,385 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2., pid=83, masterSystemTime=1689682493369 2023-07-18 12:14:53,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:53,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:53,387 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:53,387 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682493387"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682493387"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682493387"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682493387"}]},"ts":"1689682493387"} 2023-07-18 12:14:53,392 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-18 12:14:53,392 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,35237,1689682479509 in 172 msec 2023-07-18 12:14:53,395 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, REOPEN/MOVE in 500 msec 2023-07-18 12:14:53,822 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-18 12:14:53,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-18 12:14:53,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
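Annotation: the entries from 12:14:52,885 through 12:14:53,893 show RSGroupAdminEndpoint handling a MoveTables request: the rsgroup znodes are rewritten, then region 853b80efdfa7091744603fdaa4a82ca2 is closed on port 44601 and reopened on port 35237 through the REOPEN/MOVE procedure pid=81. On the client this corresponds to a single moveTables call; the sketch below assumes the RSGroupAdminClient from the hbase-rsgroup module is the caller, which the log does not state.

    // Sketch of the MoveTables request handled above (table -> rsgroup 'bar').
    // Using RSGroupAdminClient here is an assumption about the caller.
    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesToBarSketch {
      static void moveTableToBar(Connection connection) throws IOException {
        RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(connection);
        // Triggers the REOPEN/MOVE procedure (pid=81) that relocates the region
        // onto a server already assigned to group 'bar'.
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
            "bar");
      }
    }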
2023-07-18 12:14:53,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:53,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:53,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:53,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 12:14:53,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:53,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 12:14:53,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:53,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 290 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:51504 deadline: 1689683693903, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-18 12:14:53,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:35237] to rsgroup default 2023-07-18 12:14:53,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:53,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 292 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:51504 deadline: 1689683693905, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-18 12:14:53,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-18 12:14:53,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:53,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 12:14:53,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:53,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:53,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-18 12:14:53,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region 853b80efdfa7091744603fdaa4a82ca2 to RSGroup default 2023-07-18 12:14:53,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, REOPEN/MOVE 2023-07-18 12:14:53,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 12:14:53,935 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, REOPEN/MOVE 2023-07-18 12:14:53,936 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:53,936 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682493936"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682493936"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682493936"}]},"ts":"1689682493936"} 2023-07-18 12:14:53,938 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:54,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 853b80efdfa7091744603fdaa4a82ca2, disabling compactions & flushes 2023-07-18 12:14:54,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:54,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:54,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. after waiting 0 ms 2023-07-18 12:14:54,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:54,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:14:54,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 
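Annotation: the two ipc.MetricsHBaseServer / CallRunner entries above show the master rejecting, with ConstraintException, both the RemoveRSGroup request (group 'bar' still holds one table) and the MoveServers request (the servers cannot leave a group whose tables would be left unhosted). A test typically asserts those rejections roughly as sketched below; the try/fail/catch style is an assumption about the test, only the exception type, the group, and the server list come from the log.

    // Sketch of asserting the two rejections logged above.
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    public class FailRemoveGroupAssertionsSketch {
      static void assertRemoveAndMoveRejected(RSGroupAdmin rsGroupAdmin) throws Exception {
        try {
          rsGroupAdmin.removeRSGroup("bar");   // rejected: 'bar' still holds a table
          throw new AssertionError("removeRSGroup should have failed");
        } catch (ConstraintException expected) {
          // "RSGroup bar has 1 tables; you must remove these tables ..."
        }
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 41985),
            Address.fromParts("jenkins-hbase4.apache.org", 44567),
            Address.fromParts("jenkins-hbase4.apache.org", 35237)));
        try {
          rsGroupAdmin.moveServers(servers, "default");   // rejected: tables left unhosted
          throw new AssertionError("moveServers should have failed");
        } catch (ConstraintException expected) {
          // "Cannot leave a RSGroup bar that contains tables without servers to host them."
        }
      }
    }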
2023-07-18 12:14:54,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 853b80efdfa7091744603fdaa4a82ca2: 2023-07-18 12:14:54,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 853b80efdfa7091744603fdaa4a82ca2 move to jenkins-hbase4.apache.org,44601,1689682479947 record at close sequenceid=5 2023-07-18 12:14:54,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,103 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=CLOSED 2023-07-18 12:14:54,103 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682494103"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682494103"}]},"ts":"1689682494103"} 2023-07-18 12:14:54,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-18 12:14:54,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,35237,1689682479509 in 169 msec 2023-07-18 12:14:54,111 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:14:54,262 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:54,262 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682494262"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682494262"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682494262"}]},"ts":"1689682494262"} 2023-07-18 12:14:54,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:54,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 
2023-07-18 12:14:54,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 853b80efdfa7091744603fdaa4a82ca2, NAME => 'Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:54,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:54,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,422 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,423 DEBUG [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/f 2023-07-18 12:14:54,423 DEBUG [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/f 2023-07-18 12:14:54,424 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 853b80efdfa7091744603fdaa4a82ca2 columnFamilyName f 2023-07-18 12:14:54,424 INFO [StoreOpener-853b80efdfa7091744603fdaa4a82ca2-1] regionserver.HStore(310): Store=853b80efdfa7091744603fdaa4a82ca2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:54,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,426 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:54,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 853b80efdfa7091744603fdaa4a82ca2; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10139730560, jitterRate=-0.055664002895355225}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:54,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 853b80efdfa7091744603fdaa4a82ca2: 2023-07-18 12:14:54,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2., pid=86, masterSystemTime=1689682494416 2023-07-18 12:14:54,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:54,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:54,444 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:54,444 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682494444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682494444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682494444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682494444"}]},"ts":"1689682494444"} 2023-07-18 12:14:54,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-18 12:14:54,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,44601,1689682479947 in 182 msec 2023-07-18 12:14:54,450 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, REOPEN/MOVE in 529 msec 2023-07-18 12:14:54,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-18 12:14:54,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
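Note: the REOPEN/MOVE procedure above (pid=84, with CloseRegionProcedure pid=85 and OpenRegionProcedure pid=86) is the server-side effect of the client asking the rsgroup coprocessor to move the table back to the default group: the region is closed on jenkins-hbase4.apache.org,35237 and reopened on jenkins-hbase4.apache.org,44601, a server in the target group. A minimal sketch of the client call that triggers this, using the hbase-rsgroup RSGroupAdminClient that this test exercises; connection setup (hbase-site.xml on the classpath) and the class name are assumptions for illustration, not the test's own code:

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToDefaultGroup {          // illustrative class name
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // assumes cluster config is on the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Ask the master's RSGroupAdminEndpoint to move the table to the 'default' group.
          // The master then runs a TransitRegionStateProcedure (REOPEN/MOVE) per region,
          // closing it on its current server and reopening it on a server in the target
          // group, which is the pid=84/85/86 sequence recorded above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
        }
      }
    }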
2023-07-18 12:14:54,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:54,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:54,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:54,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 12:14:54,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:54,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 299 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:51504 deadline: 1689683694942, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
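Note: the ConstraintException above is the expected failure this test provokes: removeRSGroup refuses to drop a group that still owns servers (here 'bar' with 3 servers). The entries that follow show the required order of operations, moving the servers back to 'default' first and only then removing the group. A hedged sketch of that sequence with RSGroupAdminClient; it is not the test's own code, and the group membership is looked up via getRSGroupInfo rather than hard-coding the server addresses from the log:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RemoveRSGroupSafely {              // illustrative class name
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Calling rsGroupAdmin.removeRSGroup("bar") at this point would throw the
          // ConstraintException logged above, because the group still has member servers.

          // 1. Look up the group's current members and move them back to 'default'.
          RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
          Set<Address> servers = new HashSet<>(bar.getServers());
          rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);

          // 2. The group is now empty and can be removed.
          rsGroupAdmin.removeRSGroup("bar");
        }
      }
    }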
2023-07-18 12:14:54,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:35237] to rsgroup default 2023-07-18 12:14:54,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:54,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 12:14:54,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:54,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:54,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-18 12:14:54,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721, jenkins-hbase4.apache.org,44567,1689682483625] are moved back to bar 2023-07-18 12:14:54,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-18 12:14:54,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:54,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:54,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:54,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 12:14:54,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:54,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:54,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 12:14:54,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:54,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:54,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:54,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:54,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:54,969 INFO [Listener at localhost/37687] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-18 12:14:54,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-18 12:14:54,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-18 12:14:54,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-18 12:14:54,973 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682494973"}]},"ts":"1689682494973"} 2023-07-18 12:14:54,974 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-18 12:14:54,976 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-18 12:14:54,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, UNASSIGN}] 2023-07-18 12:14:54,980 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, UNASSIGN 2023-07-18 12:14:54,981 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:54,981 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682494980"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682494980"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682494980"}]},"ts":"1689682494980"} 2023-07-18 12:14:54,984 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; CloseRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:55,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-18 12:14:55,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:55,140 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 853b80efdfa7091744603fdaa4a82ca2, disabling compactions & flushes 2023-07-18 12:14:55,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:55,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:55,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. after waiting 0 ms 2023-07-18 12:14:55,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:55,144 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 12:14:55,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2. 2023-07-18 12:14:55,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 853b80efdfa7091744603fdaa4a82ca2: 2023-07-18 12:14:55,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:55,147 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=853b80efdfa7091744603fdaa4a82ca2, regionState=CLOSED 2023-07-18 12:14:55,147 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689682495147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682495147"}]},"ts":"1689682495147"} 2023-07-18 12:14:55,150 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-18 12:14:55,150 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; CloseRegionProcedure 853b80efdfa7091744603fdaa4a82ca2, server=jenkins-hbase4.apache.org,44601,1689682479947 in 166 msec 2023-07-18 12:14:55,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-18 12:14:55,152 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=853b80efdfa7091744603fdaa4a82ca2, UNASSIGN in 173 msec 2023-07-18 12:14:55,152 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682495152"}]},"ts":"1689682495152"} 2023-07-18 12:14:55,153 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-18 12:14:55,155 INFO [PEWorker-3] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-18 12:14:55,157 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 187 msec 2023-07-18 12:14:55,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-18 12:14:55,276 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-18 12:14:55,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-18 12:14:55,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 12:14:55,280 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 12:14:55,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-18 12:14:55,281 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=90, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 12:14:55,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:55,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:55,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:55,290 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:55,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 12:14:55,293 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/recovered.edits] 2023-07-18 12:14:55,307 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/recovered.edits/10.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2/recovered.edits/10.seqid 2023-07-18 12:14:55,308 DEBUG [HFileArchiver-2] 
backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testFailRemoveGroup/853b80efdfa7091744603fdaa4a82ca2 2023-07-18 12:14:55,308 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 12:14:55,311 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=90, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 12:14:55,314 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-18 12:14:55,316 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-18 12:14:55,318 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=90, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 12:14:55,318 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-18 12:14:55,318 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682495318"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:55,322 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 12:14:55,322 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 853b80efdfa7091744603fdaa4a82ca2, NAME => 'Group_testFailRemoveGroup,,1689682492153.853b80efdfa7091744603fdaa4a82ca2.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 12:14:55,322 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-18 12:14:55,322 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689682495322"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:55,334 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-18 12:14:55,343 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=90, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 12:14:55,345 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 66 msec 2023-07-18 12:14:55,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 12:14:55,393 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-18 12:14:55,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:55,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:55,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:55,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
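Note: the teardown recorded above follows the usual two-step table drop: DisableTableProcedure (pid=87) unassigns the region and marks the table DISABLED in hbase:meta, then DeleteTableProcedure (pid=90) archives the region directory under HDFS and removes the meta rows and the table descriptor. A minimal sketch of the equivalent client-side Admin calls; connection setup and the class name are assumptions for illustration:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTestTable {                    // illustrative class name
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testFailRemoveGroup");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.tableExists(table)) {
            // DisableTableProcedure: regions are unassigned and the table state goes
            // DISABLING -> DISABLED in hbase:meta (pid=87 above).
            if (admin.isTableEnabled(table)) {
              admin.disableTable(table);
            }
            // DeleteTableProcedure: region dirs are moved to the archive, meta rows and
            // the table descriptor are removed (pid=90 above).
            admin.deleteTable(table);
          }
        }
      }
    }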
2023-07-18 12:14:55,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:55,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:55,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:55,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:55,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:55,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:55,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:55,411 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:55,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:55,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:55,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:55,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:55,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:55,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:55,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:55,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:55,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:55,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 347 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683695429, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:55,430 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:55,432 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:55,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:55,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:55,433 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:55,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:55,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:55,458 INFO [Listener at localhost/37687] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=523 (was 505) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-11 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741859_1035, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_102572005_17 at /127.0.0.1:40392 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_102572005_17 at /127.0.0.1:39012 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1026535489_17 at /127.0.0.1:37152 [Receiving block BP-1681315234-172.31.14.131-1689682473336:blk_1073741859_1035] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x497c82a-shared-pool-2 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_102572005_17 at /127.0.0.1:40418 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1026535489_17 at /127.0.0.1:39024 [Receiving block BP-1681315234-172.31.14.131-1689682473336:blk_1073741859_1035] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_102572005_17 at /127.0.0.1:40372 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_102572005_17 at /127.0.0.1:40732 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741859_1035, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae-prefix:jenkins-hbase4.apache.org,44601,1689682479947.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1178471211_17 at /127.0.0.1:39036 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_102572005_17 at /127.0.0.1:33504 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1026535489_17 at /127.0.0.1:40390 [Receiving block BP-1681315234-172.31.14.131-1689682473336:blk_1073741859_1035] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1026535489_17 at /127.0.0.1:37158 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1681315234-172.31.14.131-1689682473336:blk_1073741859_1035, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=830 (was 808) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=476 (was 474) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=2810 (was 3086) 2023-07-18 12:14:55,458 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-18 12:14:55,479 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=523, OpenFileDescriptor=830, MaxFileDescriptor=60000, SystemLoadAverage=476, ProcessCount=176, AvailableMemoryMB=2809 2023-07-18 12:14:55,479 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-18 12:14:55,479 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-18 12:14:55,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:55,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:55,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:55,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:14:55,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:55,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:55,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:55,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:55,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:55,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:55,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:55,497 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:55,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:55,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:55,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:55,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:55,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:55,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:55,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:55,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:55,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:55,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 375 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683695512, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:55,513 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:55,517 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:55,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:55,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:55,518 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:55,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:55,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:55,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:55,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:55,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_529096275 2023-07-18 12:14:55,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:55,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529096275 2023-07-18 12:14:55,529 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:55,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:55,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:55,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:55,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:55,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35237] to rsgroup Group_testMultiTableMove_529096275 2023-07-18 12:14:55,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:55,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529096275 2023-07-18 12:14:55,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:55,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:55,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 12:14:55,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509] are moved back to default 2023-07-18 12:14:55,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_529096275 2023-07-18 12:14:55,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:55,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:55,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:55,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_529096275 2023-07-18 12:14:55,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:55,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:55,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 12:14:55,556 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:14:55,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 91 2023-07-18 12:14:55,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-18 12:14:55,558 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:55,558 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529096275 2023-07-18 12:14:55,558 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:55,559 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:55,564 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:14:55,565 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:55,566 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 empty. 
2023-07-18 12:14:55,566 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:55,567 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 12:14:55,583 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:55,584 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9846415613b49a6afa8412aa7797af59, NAME => 'GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:55,601 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:55,601 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 9846415613b49a6afa8412aa7797af59, disabling compactions & flushes 2023-07-18 12:14:55,601 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:55,601 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:55,601 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. after waiting 0 ms 2023-07-18 12:14:55,601 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:55,601 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 
2023-07-18 12:14:55,601 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 9846415613b49a6afa8412aa7797af59: 2023-07-18 12:14:55,604 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:14:55,605 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682495604"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682495604"}]},"ts":"1689682495604"} 2023-07-18 12:14:55,606 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 12:14:55,607 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:14:55,607 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682495607"}]},"ts":"1689682495607"} 2023-07-18 12:14:55,608 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-18 12:14:55,611 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:55,612 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:55,612 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:55,612 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:55,612 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:55,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, ASSIGN}] 2023-07-18 12:14:55,614 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, ASSIGN 2023-07-18 12:14:55,615 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41985,1689682479721; forceNewPlan=false, retain=false 2023-07-18 12:14:55,853 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 12:14:55,857 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=9846415613b49a6afa8412aa7797af59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:55,857 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682495857"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682495857"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682495857"}]},"ts":"1689682495857"} 2023-07-18 12:14:55,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-18 12:14:55,859 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 9846415613b49a6afa8412aa7797af59, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:56,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:56,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9846415613b49a6afa8412aa7797af59, NAME => 'GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:56,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:56,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,023 INFO [StoreOpener-9846415613b49a6afa8412aa7797af59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,025 DEBUG [StoreOpener-9846415613b49a6afa8412aa7797af59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/f 2023-07-18 12:14:56,026 DEBUG [StoreOpener-9846415613b49a6afa8412aa7797af59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/f 2023-07-18 12:14:56,026 INFO [StoreOpener-9846415613b49a6afa8412aa7797af59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9846415613b49a6afa8412aa7797af59 columnFamilyName f 2023-07-18 12:14:56,027 INFO [StoreOpener-9846415613b49a6afa8412aa7797af59-1] regionserver.HStore(310): Store=9846415613b49a6afa8412aa7797af59/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:56,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:56,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9846415613b49a6afa8412aa7797af59; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10337376160, jitterRate=-0.03725682199001312}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:56,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9846415613b49a6afa8412aa7797af59: 2023-07-18 12:14:56,044 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59., pid=93, masterSystemTime=1689682496011 2023-07-18 12:14:56,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:56,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 
2023-07-18 12:14:56,046 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=9846415613b49a6afa8412aa7797af59, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:56,046 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682496046"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682496046"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682496046"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682496046"}]},"ts":"1689682496046"} 2023-07-18 12:14:56,050 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-18 12:14:56,050 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 9846415613b49a6afa8412aa7797af59, server=jenkins-hbase4.apache.org,41985,1689682479721 in 189 msec 2023-07-18 12:14:56,052 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-18 12:14:56,052 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, ASSIGN in 438 msec 2023-07-18 12:14:56,053 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:14:56,053 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682496053"}]},"ts":"1689682496053"} 2023-07-18 12:14:56,055 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-18 12:14:56,057 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:14:56,059 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 504 msec 2023-07-18 12:14:56,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-18 12:14:56,060 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 91 completed 2023-07-18 12:14:56,060 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-18 12:14:56,060 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:56,068 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-18 12:14:56,068 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:56,068 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-18 12:14:56,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:56,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 12:14:56,073 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:14:56,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 94 2023-07-18 12:14:56,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 12:14:56,080 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:56,081 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529096275 2023-07-18 12:14:56,081 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:56,082 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:56,084 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:14:56,086 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,087 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e empty. 
2023-07-18 12:14:56,087 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,087 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 12:14:56,111 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:56,117 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => a02ce677f3940a04facb793bd9bbd80e, NAME => 'GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:56,142 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:56,142 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing a02ce677f3940a04facb793bd9bbd80e, disabling compactions & flushes 2023-07-18 12:14:56,142 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,142 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,142 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. after waiting 0 ms 2023-07-18 12:14:56,142 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,142 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 
2023-07-18 12:14:56,142 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for a02ce677f3940a04facb793bd9bbd80e: 2023-07-18 12:14:56,146 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:14:56,147 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682496147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682496147"}]},"ts":"1689682496147"} 2023-07-18 12:14:56,149 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 12:14:56,149 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:14:56,150 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682496150"}]},"ts":"1689682496150"} 2023-07-18 12:14:56,151 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-18 12:14:56,156 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:56,156 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:56,156 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:56,156 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:56,156 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:56,157 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, ASSIGN}] 2023-07-18 12:14:56,168 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, ASSIGN 2023-07-18 12:14:56,169 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41985,1689682479721; forceNewPlan=false, retain=false 2023-07-18 12:14:56,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 12:14:56,320 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 12:14:56,321 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=a02ce677f3940a04facb793bd9bbd80e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:56,322 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682496321"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682496321"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682496321"}]},"ts":"1689682496321"} 2023-07-18 12:14:56,324 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure a02ce677f3940a04facb793bd9bbd80e, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:56,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 12:14:56,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a02ce677f3940a04facb793bd9bbd80e, NAME => 'GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:56,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:56,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,483 INFO [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,485 DEBUG [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/f 2023-07-18 12:14:56,485 DEBUG [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/f 2023-07-18 12:14:56,485 INFO [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a02ce677f3940a04facb793bd9bbd80e columnFamilyName f 2023-07-18 12:14:56,486 INFO [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] regionserver.HStore(310): Store=a02ce677f3940a04facb793bd9bbd80e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:56,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:56,492 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a02ce677f3940a04facb793bd9bbd80e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10062104000, jitterRate=-0.06289353966712952}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:56,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a02ce677f3940a04facb793bd9bbd80e: 2023-07-18 12:14:56,493 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e., pid=96, masterSystemTime=1689682496476 2023-07-18 12:14:56,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,495 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 
2023-07-18 12:14:56,495 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=a02ce677f3940a04facb793bd9bbd80e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:56,495 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682496495"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682496495"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682496495"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682496495"}]},"ts":"1689682496495"} 2023-07-18 12:14:56,498 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-18 12:14:56,498 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure a02ce677f3940a04facb793bd9bbd80e, server=jenkins-hbase4.apache.org,41985,1689682479721 in 173 msec 2023-07-18 12:14:56,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-18 12:14:56,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, ASSIGN in 341 msec 2023-07-18 12:14:56,501 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:14:56,501 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682496501"}]},"ts":"1689682496501"} 2023-07-18 12:14:56,502 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-18 12:14:56,505 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:14:56,506 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 435 msec 2023-07-18 12:14:56,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 12:14:56,678 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 94 completed 2023-07-18 12:14:56,678 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-18 12:14:56,678 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:56,682 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
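[Editor's note] The Waiter/HBaseTestingUtility records above ("Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms") come from the test's post-create wait. Assuming the usual HBaseTestingUtility instance (called TEST_UTIL here purely for illustration; the log only names the class), the wait is roughly:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    // Sketch: blocks until every region of the table is assigned, or 60 s elapse,
    // mirroring the "Timeout = 60000ms" wait recorded in the log above.
    static void waitForTableB(HBaseTestingUtility TEST_UTIL) throws IOException {
      TEST_UTIL.waitUntilAllRegionsAssigned(
          TableName.valueOf("GrouptestMultiTableMoveB"), 60000);
    }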
2023-07-18 12:14:56,682 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:56,682 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-18 12:14:56,683 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:56,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 12:14:56,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:14:56,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 12:14:56,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:14:56,694 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_529096275 2023-07-18 12:14:56,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_529096275 2023-07-18 12:14:56,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:56,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529096275 2023-07-18 12:14:56,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:56,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:56,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_529096275 2023-07-18 12:14:56,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region a02ce677f3940a04facb793bd9bbd80e to RSGroup Group_testMultiTableMove_529096275 2023-07-18 12:14:56,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, REOPEN/MOVE 2023-07-18 12:14:56,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_529096275 2023-07-18 12:14:56,704 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region 9846415613b49a6afa8412aa7797af59 to RSGroup Group_testMultiTableMove_529096275 2023-07-18 12:14:56,704 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, REOPEN/MOVE 2023-07-18 12:14:56,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, REOPEN/MOVE 2023-07-18 12:14:56,704 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=a02ce677f3940a04facb793bd9bbd80e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:56,705 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, REOPEN/MOVE 2023-07-18 12:14:56,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_529096275, current retry=0 2023-07-18 12:14:56,705 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682496704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682496704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682496704"}]},"ts":"1689682496704"} 2023-07-18 12:14:56,706 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9846415613b49a6afa8412aa7797af59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:14:56,706 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682496706"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682496706"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682496706"}]},"ts":"1689682496706"} 2023-07-18 12:14:56,707 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=97, state=RUNNABLE; CloseRegionProcedure a02ce677f3940a04facb793bd9bbd80e, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:56,708 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=100, ppid=98, state=RUNNABLE; CloseRegionProcedure 9846415613b49a6afa8412aa7797af59, server=jenkins-hbase4.apache.org,41985,1689682479721}] 2023-07-18 12:14:56,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a02ce677f3940a04facb793bd9bbd80e, disabling compactions & flushes 2023-07-18 12:14:56,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. after waiting 0 ms 2023-07-18 12:14:56,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:56,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:56,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a02ce677f3940a04facb793bd9bbd80e: 2023-07-18 12:14:56,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a02ce677f3940a04facb793bd9bbd80e move to jenkins-hbase4.apache.org,35237,1689682479509 record at close sequenceid=2 2023-07-18 12:14:56,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:56,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9846415613b49a6afa8412aa7797af59, disabling compactions & flushes 2023-07-18 12:14:56,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:56,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:56,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. after waiting 0 ms 2023-07-18 12:14:56,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 
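[Editor's note] The MoveTables request logged above (RSGroupAdminEndpoint: "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_529096275") and the REOPEN/MOVE close/open pairs that follow are driven by a single admin call. A sketch against the branch-2 rsgroup admin client is below; "conn" and the method name are assumptions, only the table and group names are copied from the log.

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch: moves both test tables to the target rsgroup named in the log.
    static void moveBothTables(Connection conn) throws IOException {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>(Arrays.asList(
          TableName.valueOf("GrouptestMultiTableMoveA"),
          TableName.valueOf("GrouptestMultiTableMoveB")));
      rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_529096275");
    }

The server side of this call is exactly what the surrounding records show: the master closes each region on its current server, updates hbase:meta, and reopens it on a server belonging to the target group.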
2023-07-18 12:14:56,868 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=a02ce677f3940a04facb793bd9bbd80e, regionState=CLOSED 2023-07-18 12:14:56,869 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682496868"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682496868"}]},"ts":"1689682496868"} 2023-07-18 12:14:56,872 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=97 2023-07-18 12:14:56,872 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=97, state=SUCCESS; CloseRegionProcedure a02ce677f3940a04facb793bd9bbd80e, server=jenkins-hbase4.apache.org,41985,1689682479721 in 163 msec 2023-07-18 12:14:56,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:56,873 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35237,1689682479509; forceNewPlan=false, retain=false 2023-07-18 12:14:56,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 
2023-07-18 12:14:56,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9846415613b49a6afa8412aa7797af59: 2023-07-18 12:14:56,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9846415613b49a6afa8412aa7797af59 move to jenkins-hbase4.apache.org,35237,1689682479509 record at close sequenceid=2 2023-07-18 12:14:56,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:56,875 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9846415613b49a6afa8412aa7797af59, regionState=CLOSED 2023-07-18 12:14:56,875 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682496875"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682496875"}]},"ts":"1689682496875"} 2023-07-18 12:14:56,877 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=100, resume processing ppid=98 2023-07-18 12:14:56,877 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, ppid=98, state=SUCCESS; CloseRegionProcedure 9846415613b49a6afa8412aa7797af59, server=jenkins-hbase4.apache.org,41985,1689682479721 in 168 msec 2023-07-18 12:14:56,878 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35237,1689682479509; forceNewPlan=false, retain=false 2023-07-18 12:14:57,023 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9846415613b49a6afa8412aa7797af59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:57,023 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=a02ce677f3940a04facb793bd9bbd80e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:57,024 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682497023"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682497023"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682497023"}]},"ts":"1689682497023"} 2023-07-18 12:14:57,024 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682497023"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682497023"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682497023"}]},"ts":"1689682497023"} 2023-07-18 12:14:57,025 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=98, state=RUNNABLE; OpenRegionProcedure 9846415613b49a6afa8412aa7797af59, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:57,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=97, 
state=RUNNABLE; OpenRegionProcedure a02ce677f3940a04facb793bd9bbd80e, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:57,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:57,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9846415613b49a6afa8412aa7797af59, NAME => 'GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:57,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:57,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,183 INFO [StoreOpener-9846415613b49a6afa8412aa7797af59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,184 DEBUG [StoreOpener-9846415613b49a6afa8412aa7797af59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/f 2023-07-18 12:14:57,184 DEBUG [StoreOpener-9846415613b49a6afa8412aa7797af59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/f 2023-07-18 12:14:57,185 INFO [StoreOpener-9846415613b49a6afa8412aa7797af59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9846415613b49a6afa8412aa7797af59 columnFamilyName f 2023-07-18 12:14:57,186 INFO [StoreOpener-9846415613b49a6afa8412aa7797af59-1] regionserver.HStore(310): Store=9846415613b49a6afa8412aa7797af59/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:57,186 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9846415613b49a6afa8412aa7797af59; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9566975680, jitterRate=-0.10900595784187317}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:57,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9846415613b49a6afa8412aa7797af59: 2023-07-18 12:14:57,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59., pid=101, masterSystemTime=1689682497177 2023-07-18 12:14:57,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:57,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:57,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 
2023-07-18 12:14:57,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a02ce677f3940a04facb793bd9bbd80e, NAME => 'GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:57,196 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9846415613b49a6afa8412aa7797af59, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:57,196 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682497196"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682497196"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682497196"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682497196"}]},"ts":"1689682497196"} 2023-07-18 12:14:57,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:57,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:57,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:57,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:57,198 INFO [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:57,199 DEBUG [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/f 2023-07-18 12:14:57,199 DEBUG [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/f 2023-07-18 12:14:57,200 INFO [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a02ce677f3940a04facb793bd9bbd80e columnFamilyName f 2023-07-18 12:14:57,201 INFO [StoreOpener-a02ce677f3940a04facb793bd9bbd80e-1] regionserver.HStore(310): Store=a02ce677f3940a04facb793bd9bbd80e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:57,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:57,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:57,204 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=98 2023-07-18 12:14:57,204 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=98, state=SUCCESS; OpenRegionProcedure 9846415613b49a6afa8412aa7797af59, server=jenkins-hbase4.apache.org,35237,1689682479509 in 173 msec 2023-07-18 12:14:57,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, REOPEN/MOVE in 500 msec 2023-07-18 12:14:57,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:57,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a02ce677f3940a04facb793bd9bbd80e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10749864000, jitterRate=0.0011591017246246338}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:57,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a02ce677f3940a04facb793bd9bbd80e: 2023-07-18 12:14:57,211 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e., pid=102, masterSystemTime=1689682497177 2023-07-18 12:14:57,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:57,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 
2023-07-18 12:14:57,221 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=a02ce677f3940a04facb793bd9bbd80e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:57,221 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682497221"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682497221"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682497221"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682497221"}]},"ts":"1689682497221"} 2023-07-18 12:14:57,225 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=97 2023-07-18 12:14:57,225 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=97, state=SUCCESS; OpenRegionProcedure a02ce677f3940a04facb793bd9bbd80e, server=jenkins-hbase4.apache.org,35237,1689682479509 in 197 msec 2023-07-18 12:14:57,227 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, REOPEN/MOVE in 523 msec 2023-07-18 12:14:57,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=97 2023-07-18 12:14:57,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_529096275. 2023-07-18 12:14:57,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:57,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:57,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:57,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 12:14:57,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:14:57,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 12:14:57,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:14:57,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:57,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:57,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_529096275 2023-07-18 12:14:57,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:57,726 INFO [Listener at localhost/37687] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-18 12:14:57,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-18 12:14:57,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 12:14:57,734 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682497734"}]},"ts":"1689682497734"} 2023-07-18 12:14:57,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-18 12:14:57,735 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-18 12:14:57,738 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-18 12:14:57,738 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, UNASSIGN}] 2023-07-18 12:14:57,740 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, UNASSIGN 2023-07-18 12:14:57,741 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=9846415613b49a6afa8412aa7797af59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:57,741 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682497741"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682497741"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682497741"}]},"ts":"1689682497741"} 2023-07-18 12:14:57,742 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; CloseRegionProcedure 9846415613b49a6afa8412aa7797af59, 
server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:57,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-18 12:14:57,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9846415613b49a6afa8412aa7797af59, disabling compactions & flushes 2023-07-18 12:14:57,896 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:57,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:57,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. after waiting 0 ms 2023-07-18 12:14:57,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 2023-07-18 12:14:57,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:14:57,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59. 
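[Editor's note] Before the disable/delete teardown that begins above, the GetRSGroupInfoOfTable / GetRSGroupInfo requests recorded at 12:14:57,71x-72x are the read side of the same rsgroup admin client, used to verify the move landed. A sketch of that check, under the same assumptions as the previous note ("conn" and the method name are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Sketch: confirms GrouptestMultiTableMoveB now belongs to the target group.
    static void verifyGroupOfTableB(Connection conn) throws IOException {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(
          TableName.valueOf("GrouptestMultiTableMoveB"));
      if (!"Group_testMultiTableMove_529096275".equals(info.getName())) {
        throw new IllegalStateException("table not in expected rsgroup: " + info.getName());
      }
    }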
2023-07-18 12:14:57,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9846415613b49a6afa8412aa7797af59: 2023-07-18 12:14:57,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:57,903 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=9846415613b49a6afa8412aa7797af59, regionState=CLOSED 2023-07-18 12:14:57,904 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682497903"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682497903"}]},"ts":"1689682497903"} 2023-07-18 12:14:57,907 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-18 12:14:57,907 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; CloseRegionProcedure 9846415613b49a6afa8412aa7797af59, server=jenkins-hbase4.apache.org,35237,1689682479509 in 163 msec 2023-07-18 12:14:57,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-18 12:14:57,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9846415613b49a6afa8412aa7797af59, UNASSIGN in 169 msec 2023-07-18 12:14:57,910 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682497910"}]},"ts":"1689682497910"} 2023-07-18 12:14:57,915 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-18 12:14:57,917 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-18 12:14:57,932 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 196 msec 2023-07-18 12:14:58,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-18 12:14:58,037 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-18 12:14:58,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-18 12:14:58,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 12:14:58,045 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 12:14:58,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_529096275' 2023-07-18 12:14:58,051 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=106, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 12:14:58,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529096275 2023-07-18 12:14:58,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:58,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 12:14:58,059 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:58,061 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/recovered.edits] 2023-07-18 12:14:58,067 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/recovered.edits/7.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59/recovered.edits/7.seqid 2023-07-18 12:14:58,067 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveA/9846415613b49a6afa8412aa7797af59 2023-07-18 12:14:58,067 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 12:14:58,071 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=106, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 12:14:58,073 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-18 12:14:58,075 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-18 12:14:58,076 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=106, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 12:14:58,076 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
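[Editor's note] The DisableTableProcedure (pid=103) and DeleteTableProcedure (pid=106) records above, including the HFileArchiver moving the region directory under archive/ and the removal of the table's rows from hbase:meta, are the server side of the standard two-step teardown. The client side is just the disable/delete pair; a sketch, with "admin" being an Admin handle as obtained earlier:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch: the disable+delete pair that produces the two procedures above.
    static void dropTableA(Admin admin) throws IOException {
      TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");
      admin.disableTable(tableA);  // DisableTableProcedure (pid=103 in this run)
      admin.deleteTable(tableA);   // DeleteTableProcedure (pid=106 in this run)
    }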
2023-07-18 12:14:58,076 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682498076"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:58,078 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 12:14:58,078 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9846415613b49a6afa8412aa7797af59, NAME => 'GrouptestMultiTableMoveA,,1689682495552.9846415613b49a6afa8412aa7797af59.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 12:14:58,078 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-18 12:14:58,078 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689682498078"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:58,080 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-18 12:14:58,082 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=106, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 12:14:58,084 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 43 msec 2023-07-18 12:14:58,148 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 12:14:58,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 12:14:58,158 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-18 12:14:58,158 INFO [Listener at localhost/37687] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-18 12:14:58,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-18 12:14:58,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 12:14:58,173 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682498173"}]},"ts":"1689682498173"} 2023-07-18 12:14:58,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-18 12:14:58,175 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-18 12:14:58,185 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-18 12:14:58,195 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, 
UNASSIGN}] 2023-07-18 12:14:58,197 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, UNASSIGN 2023-07-18 12:14:58,198 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=a02ce677f3940a04facb793bd9bbd80e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:14:58,198 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682498198"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682498198"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682498198"}]},"ts":"1689682498198"} 2023-07-18 12:14:58,199 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure a02ce677f3940a04facb793bd9bbd80e, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:14:58,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-18 12:14:58,352 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:58,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a02ce677f3940a04facb793bd9bbd80e, disabling compactions & flushes 2023-07-18 12:14:58,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:58,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:58,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. after waiting 0 ms 2023-07-18 12:14:58,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 2023-07-18 12:14:58,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:14:58,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e. 
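While the UNASSIGN of a02ce677f3940a04facb793bd9bbd80e runs above, the client keeps polling the master ("Checking to see if procedure is done pid=107") until the DisableTableProcedure completes. A small hedged sketch of waiting for the DISABLED state using only public Admin methods; the poll interval and timeout are illustrative, not taken from the test:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public final class WaitForDisabledSketch {
      // Polls until the table reports DISABLED or the deadline passes.
      static void waitUntilDisabled(Admin admin, TableName tn, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!admin.isTableDisabled(tn)) {
          if (System.currentTimeMillis() > deadline) {
            throw new IllegalStateException(tn + " not DISABLED after " + timeoutMs + " ms");
          }
          Thread.sleep(100);
        }
      }
    }

In the synchronous disableTable call such polling is done by the client's procedure future, so an explicit loop like this is only needed when the disable was started asynchronously.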
2023-07-18 12:14:58,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a02ce677f3940a04facb793bd9bbd80e: 2023-07-18 12:14:58,360 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:58,361 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=a02ce677f3940a04facb793bd9bbd80e, regionState=CLOSED 2023-07-18 12:14:58,361 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689682498360"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682498360"}]},"ts":"1689682498360"} 2023-07-18 12:14:58,365 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-18 12:14:58,365 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure a02ce677f3940a04facb793bd9bbd80e, server=jenkins-hbase4.apache.org,35237,1689682479509 in 163 msec 2023-07-18 12:14:58,366 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-18 12:14:58,366 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a02ce677f3940a04facb793bd9bbd80e, UNASSIGN in 178 msec 2023-07-18 12:14:58,367 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682498367"}]},"ts":"1689682498367"} 2023-07-18 12:14:58,368 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-18 12:14:58,370 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-18 12:14:58,372 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 212 msec 2023-07-18 12:14:58,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-18 12:14:58,476 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-18 12:14:58,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-18 12:14:58,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 12:14:58,480 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 12:14:58,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_529096275' 2023-07-18 12:14:58,481 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 12:14:58,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529096275 2023-07-18 12:14:58,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:58,485 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:58,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 12:14:58,487 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/recovered.edits] 2023-07-18 12:14:58,492 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/recovered.edits/7.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e/recovered.edits/7.seqid 2023-07-18 12:14:58,493 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e 2023-07-18 12:14:58,493 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 12:14:58,495 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 12:14:58,497 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-18 12:14:58,499 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-18 12:14:58,501 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 12:14:58,501 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
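The HFileArchiver lines above show that DELETE_TABLE_CLEAR_FS_LAYOUT does not destroy region data outright: the region directory under .tmp/data/... is copied into archive/data/... and only then removed from its original location. A hedged sketch of checking that layout afterwards with the plain Hadoop FileSystem API; the HDFS URL and region name are copied from the log, everything else is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveCheckSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Root of the test cluster's HDFS data, as printed by the HFileArchiver lines above.
        Path root = new Path(
            "hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae");
        Path archivedRegion = new Path(root,
            "archive/data/default/GrouptestMultiTableMoveB/a02ce677f3940a04facb793bd9bbd80e");
        FileSystem fs = root.getFileSystem(conf);
        // After the delete procedure only the archive copy of the region should remain.
        System.out.println("archived region present: " + fs.exists(archivedRegion));
      }
    }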
2023-07-18 12:14:58,501 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682498501"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:58,503 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 12:14:58,503 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a02ce677f3940a04facb793bd9bbd80e, NAME => 'GrouptestMultiTableMoveB,,1689682496070.a02ce677f3940a04facb793bd9bbd80e.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 12:14:58,503 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-18 12:14:58,503 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689682498503"}]},"ts":"9223372036854775807"} 2023-07-18 12:14:58,505 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-18 12:14:58,508 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 12:14:58,510 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 31 msec 2023-07-18 12:14:58,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 12:14:58,588 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-18 12:14:58,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:58,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
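The "list rsgroup" / ListRSGroupInfos requests above are how the test checks group membership between steps. A hedged sketch of issuing the same listing through the RSGroupAdminClient shipped in this module; the constructor and method names are assumed from the client stack traces elsewhere in this log rather than verified against the exact branch:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroupsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
          for (RSGroupInfo info : groupAdmin.listRSGroups()) {
            // Summarize each group by name, servers and tables, as the test's own log lines do.
            System.out.println("Name:" + info.getName()
                + ", Servers:" + info.getServers()
                + ", Tables:" + info.getTables());
          }
        }
      }
    }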
2023-07-18 12:14:58,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:58,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35237] to rsgroup default 2023-07-18 12:14:58,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529096275 2023-07-18 12:14:58,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:58,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_529096275, current retry=0 2023-07-18 12:14:58,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509] are moved back to Group_testMultiTableMove_529096275 2023-07-18 12:14:58,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_529096275 => default 2023-07-18 12:14:58,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:58,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_529096275 2023-07-18 12:14:58,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 12:14:58,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:58,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:58,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
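The sequence just logged, an empty moveTables to default, moveServers of jenkins-hbase4.apache.org:35237 back to default, then removeRSGroup of Group_testMultiTableMove_529096275, is the standard per-test cleanup. A hedged sketch of the same sequence through RSGroupAdminClient; the Address.fromParts signature and the group/host/port values are assumptions taken from this log, not the test source:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class GroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
          // Nothing left in the group, mirrors "moveTables() passed an empty set. Ignoring."
          groupAdmin.moveTables(Collections.emptySet(), "default");
          // Return the borrowed region server to the default group.
          groupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 35237)),
              "default");
          // The group is now empty of servers and tables and can be dropped.
          groupAdmin.removeRSGroup("Group_testMultiTableMove_529096275");
        }
      }
    }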
2023-07-18 12:14:58,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:58,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:58,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:58,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:58,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:58,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:58,619 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:58,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:58,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:58,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:58,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:58,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:58,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 511 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683698631, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:58,632 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:58,634 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:58,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,635 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:58,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:58,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,662 INFO [Listener at localhost/37687] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=514 (was 523), OpenFileDescriptor=811 (was 830), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 476), ProcessCount=176 (was 176), AvailableMemoryMB=2660 (was 2809) 2023-07-18 12:14:58,662 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-18 12:14:58,684 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=514, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=176, AvailableMemoryMB=2660 2023-07-18 12:14:58,684 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-18 12:14:58,685 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-18 12:14:58,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:58,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 12:14:58,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:58,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:58,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:58,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:58,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:58,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:58,707 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:58,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:58,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:58,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:58,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:58,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:58,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 539 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683698733, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:58,734 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 12:14:58,736 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:58,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,738 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:58,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:58,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:58,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-18 12:14:58,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 12:14:58,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:58,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:58,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:35237] to rsgroup oldGroup 2023-07-18 12:14:58,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 12:14:58,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:58,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 12:14:58,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721] are moved back to default 2023-07-18 12:14:58,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-18 12:14:58,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:58,790 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,790 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,796 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 12:14:58,796 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 12:14:58,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:58,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,803 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-18 12:14:58,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 12:14:58,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 12:14:58,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:14:58,818 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:58,827 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,828 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44567] to rsgroup anotherRSGroup 2023-07-18 12:14:58,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 12:14:58,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 12:14:58,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:14:58,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 12:14:58,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,44567,1689682483625] are moved back to default 2023-07-18 12:14:58,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-18 12:14:58,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:58,854 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 12:14:58,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 12:14:58,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-18 12:14:58,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:58,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:51504 deadline: 1689683698867, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-18 12:14:58,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-18 12:14:58,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:58,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:51504 deadline: 1689683698885, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-18 12:14:58,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-18 12:14:58,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:58,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:51504 deadline: 1689683698887, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-18 12:14:58,891 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-18 12:14:58,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:58,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:51504 deadline: 1689683698891, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-18 12:14:58,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:58,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:14:58,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:58,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44567] to rsgroup default 2023-07-18 12:14:58,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 12:14:58,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 12:14:58,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:14:58,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-18 12:14:58,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,44567,1689682483625] are moved back to anotherRSGroup 2023-07-18 12:14:58,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-18 12:14:58,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:58,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-18 12:14:58,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 12:14:58,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 12:14:58,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:58,924 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:58,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-18 12:14:58,924 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:58,925 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:35237] to rsgroup default 2023-07-18 12:14:58,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 12:14:58,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:58,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-18 12:14:58,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721] are moved back to oldGroup 2023-07-18 12:14:58,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-18 12:14:58,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:58,932 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-18 12:14:58,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 12:14:58,939 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:58,940 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:58,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:14:58,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:58,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:58,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:58,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:58,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:58,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:58,951 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:58,952 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:58,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:58,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:58,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:58,958 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:58,962 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,962 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:58,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:58,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] ipc.CallRunner(144): callId: 615 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683698970, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:58,971 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:58,973 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:58,974 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:58,974 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:58,974 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:58,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:58,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:58,999 INFO [Listener at localhost/37687] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=518 (was 514) Potentially hanging thread: hconnection-0x120ad869-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=811 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 438), ProcessCount=176 (was 176), AvailableMemoryMB=2677 (was 2660) - AvailableMemoryMB LEAK? - 2023-07-18 12:14:58,999 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-18 12:14:59,016 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=518, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=176, AvailableMemoryMB=2676 2023-07-18 12:14:59,016 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-18 12:14:59,016 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-18 12:14:59,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:59,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:59,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:14:59,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:14:59,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:14:59,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:14:59,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:59,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:14:59,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:59,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:14:59,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:14:59,033 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:14:59,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:14:59,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:59,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:59,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:14:59,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:59,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:59,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:59,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:14:59,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:14:59,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 643 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683699047, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:14:59,047 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:14:59,049 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:59,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:59,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:59,050 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:14:59,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:59,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:59,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:14:59,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:59,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-18 12:14:59,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 12:14:59,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:59,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:59,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:59,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:14:59,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:59,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:59,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:35237] to rsgroup oldgroup 2023-07-18 12:14:59,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 12:14:59,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:59,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:59,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:59,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 12:14:59,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721] are moved back to default 2023-07-18 12:14:59,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-18 12:14:59,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:14:59,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:14:59,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:14:59,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 12:14:59,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:14:59,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:14:59,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-18 12:14:59,079 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:14:59,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 111 2023-07-18 12:14:59,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-18 12:14:59,081 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 12:14:59,082 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:59,082 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:59,082 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:59,084 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:14:59,086 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/testRename/a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,087 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/testRename/a094d11666c446d7944327b133b4e60c empty. 
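The records above show the master servicing RSGroupAdminService.AddRSGroup for 'oldgroup' and then MoveServers for jenkins-hbase4.apache.org:41985 and :35237 into that group, before the CreateTableProcedure for 'testRename' starts. As a point of reference, a minimal client-side sketch of the same two calls, assuming the hbase-rsgroup module's RSGroupAdminClient and an open Connection to the cluster (the host:port values are the per-run ones from this log):

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class AddGroupAndMoveServers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // AddRSGroup: create an empty group, as in "add rsgroup oldgroup" above.
      rsGroupAdmin.addRSGroup("oldgroup");
      // MoveServers: move two region servers from 'default' into 'oldgroup'.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromString("jenkins-hbase4.apache.org:41985"));
      servers.add(Address.fromString("jenkins-hbase4.apache.org:35237"));
      rsGroupAdmin.moveServers(servers, "oldgroup");
    }
  }
}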
2023-07-18 12:14:59,087 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/testRename/a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,087 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-18 12:14:59,103 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-18 12:14:59,104 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => a094d11666c446d7944327b133b4e60c, NAME => 'testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:14:59,116 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:59,116 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing a094d11666c446d7944327b133b4e60c, disabling compactions & flushes 2023-07-18 12:14:59,116 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,116 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,116 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. after waiting 0 ms 2023-07-18 12:14:59,116 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,116 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,116 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for a094d11666c446d7944327b133b4e60c: 2023-07-18 12:14:59,118 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:14:59,119 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682499119"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682499119"}]},"ts":"1689682499119"} 2023-07-18 12:14:59,120 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 12:14:59,121 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:14:59,121 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682499121"}]},"ts":"1689682499121"} 2023-07-18 12:14:59,122 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-18 12:14:59,126 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:59,126 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:59,126 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:59,126 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:59,126 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, ASSIGN}] 2023-07-18 12:14:59,128 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, ASSIGN 2023-07-18 12:14:59,128 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:14:59,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-18 12:14:59,279 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
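The CreateTableProcedure above (pid=111) was initiated by a client create of 'testRename' with a single column family 'tr', one version, and REGION_REPLICATION => 1. A hedged sketch of an equivalent client call using the standard Admin API; the descriptor values simply mirror what HMaster logged for pid=111:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTable {
  // Builds the descriptor the log prints: family 'tr', VERSIONS => '1',
  // REGION_REPLICATION => '1'; the remaining attributes are the defaults shown.
  static void create(Connection conn) throws IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
            .setMaxVersions(1)
            .build())
        .build();
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(desc); // returns once the create procedure (pid=111 here) completes
    }
  }
}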
2023-07-18 12:14:59,280 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:59,281 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682499280"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682499280"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682499280"}]},"ts":"1689682499280"} 2023-07-18 12:14:59,284 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:59,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-18 12:14:59,445 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a094d11666c446d7944327b133b4e60c, NAME => 'testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:14:59,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:14:59,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,448 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,450 DEBUG [StoreOpener-a094d11666c446d7944327b133b4e60c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/tr 2023-07-18 12:14:59,450 DEBUG [StoreOpener-a094d11666c446d7944327b133b4e60c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/tr 2023-07-18 12:14:59,450 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a094d11666c446d7944327b133b4e60c columnFamilyName tr 2023-07-18 12:14:59,451 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] regionserver.HStore(310): Store=a094d11666c446d7944327b133b4e60c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:14:59,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:14:59,463 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a094d11666c446d7944327b133b4e60c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10897942240, jitterRate=0.014949962496757507}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:14:59,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a094d11666c446d7944327b133b4e60c: 2023-07-18 12:14:59,465 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689682499076.a094d11666c446d7944327b133b4e60c., pid=113, masterSystemTime=1689682499437 2023-07-18 12:14:59,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,467 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 
2023-07-18 12:14:59,468 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:59,468 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682499468"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682499468"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682499468"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682499468"}]},"ts":"1689682499468"} 2023-07-18 12:14:59,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-18 12:14:59,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,44601,1689682479947 in 186 msec 2023-07-18 12:14:59,474 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-18 12:14:59,474 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, ASSIGN in 346 msec 2023-07-18 12:14:59,475 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:14:59,476 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682499475"}]},"ts":"1689682499475"} 2023-07-18 12:14:59,481 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-18 12:14:59,485 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:14:59,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=testRename in 409 msec 2023-07-18 12:14:59,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-18 12:14:59,684 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 111 completed 2023-07-18 12:14:59,684 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-18 12:14:59,684 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:59,688 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-18 12:14:59,688 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:14:59,689 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
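Once the create procedure finishes, the listener thread waits for every region of 'testRename' to be assigned (the HBaseTestingUtility(3430)/(3504) lines above, with a 60,000 ms timeout). In test code this is a single call on the testing utility; a sketch, assuming TEST_UTIL is the HBaseTestingUtility instance that started the mini cluster:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignment {
  // Blocks until hbase:meta and the AssignmentManager both report all regions of
  // the table as open; an overload taking an explicit timeout also exists.
  static void waitFor(HBaseTestingUtility TEST_UTIL) throws java.io.IOException {
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"));
  }
}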
2023-07-18 12:14:59,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-18 12:14:59,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 12:14:59,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:14:59,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:14:59,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:14:59,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-18 12:14:59,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region a094d11666c446d7944327b133b4e60c to RSGroup oldgroup 2023-07-18 12:14:59,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:14:59,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:14:59,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:14:59,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:14:59,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:14:59,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, REOPEN/MOVE 2023-07-18 12:14:59,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-18 12:14:59,698 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, REOPEN/MOVE 2023-07-18 12:14:59,699 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:14:59,699 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682499699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682499699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682499699"}]},"ts":"1689682499699"} 2023-07-18 12:14:59,703 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, 
ppid=114, state=RUNNABLE; CloseRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:14:59,825 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-18 12:14:59,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a094d11666c446d7944327b133b4e60c, disabling compactions & flushes 2023-07-18 12:14:59,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. after waiting 0 ms 2023-07-18 12:14:59,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:14:59,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:14:59,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 
2023-07-18 12:14:59,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a094d11666c446d7944327b133b4e60c: 2023-07-18 12:14:59,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a094d11666c446d7944327b133b4e60c move to jenkins-hbase4.apache.org,35237,1689682479509 record at close sequenceid=2 2023-07-18 12:14:59,869 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a094d11666c446d7944327b133b4e60c 2023-07-18 12:14:59,869 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=CLOSED 2023-07-18 12:14:59,869 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682499869"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682499869"}]},"ts":"1689682499869"} 2023-07-18 12:14:59,872 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-18 12:14:59,872 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,44601,1689682479947 in 168 msec 2023-07-18 12:14:59,873 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35237,1689682479509; forceNewPlan=false, retain=false 2023-07-18 12:15:00,023 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 12:15:00,024 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:15:00,024 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682500023"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682500023"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682500023"}]},"ts":"1689682500023"} 2023-07-18 12:15:00,026 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:15:00,183 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 
2023-07-18 12:15:00,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a094d11666c446d7944327b133b4e60c, NAME => 'testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:00,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:00,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:00,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:00,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:00,192 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:00,193 DEBUG [StoreOpener-a094d11666c446d7944327b133b4e60c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/tr 2023-07-18 12:15:00,193 DEBUG [StoreOpener-a094d11666c446d7944327b133b4e60c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/tr 2023-07-18 12:15:00,194 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a094d11666c446d7944327b133b4e60c columnFamilyName tr 2023-07-18 12:15:00,195 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] regionserver.HStore(310): Store=a094d11666c446d7944327b133b4e60c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:00,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:00,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:00,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:00,201 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a094d11666c446d7944327b133b4e60c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9673713760, jitterRate=-0.09906519949436188}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:00,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a094d11666c446d7944327b133b4e60c: 2023-07-18 12:15:00,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689682499076.a094d11666c446d7944327b133b4e60c., pid=116, masterSystemTime=1689682500178 2023-07-18 12:15:00,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:00,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:00,204 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:15:00,204 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682500204"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682500204"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682500204"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682500204"}]},"ts":"1689682500204"} 2023-07-18 12:15:00,207 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-18 12:15:00,207 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,35237,1689682479509 in 180 msec 2023-07-18 12:15:00,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, REOPEN/MOVE in 510 msec 2023-07-18 12:15:00,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-18 12:15:00,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
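As the records above show, MoveTables for [testRename] into 'oldgroup' produces a REOPEN/MOVE TransitRegionStateProcedure (pid=114) that closes the region on jenkins-hbase4.apache.org,44601 and re-opens it on ,35237, and the RPC only returns after ProcedureSyncWait observes pid=114 finish. A hedged sketch of the corresponding client call, again assuming the hbase-rsgroup RSGroupAdminClient:

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroup {
  // Moves every region of the listed tables onto servers of the target group;
  // the call blocks while the master re-opens each region on a group member.
  static void moveTestRename(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<TableName> tables = new HashSet<>();
    tables.add(TableName.valueOf("testRename"));
    rsGroupAdmin.moveTables(tables, "oldgroup");
  }
}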
2023-07-18 12:15:00,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:00,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:00,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:00,704 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:00,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 12:15:00,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:00,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 12:15:00,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:00,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 12:15:00,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:00,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:00,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:00,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-18 12:15:00,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 12:15:00,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 12:15:00,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:00,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 
12:15:00,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:15:00,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:00,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:00,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:00,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44567] to rsgroup normal 2023-07-18 12:15:00,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 12:15:00,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 12:15:00,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:00,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:00,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:15:00,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 12:15:00,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,44567,1689682483625] are moved back to default 2023-07-18 12:15:00,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-18 12:15:00,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:00,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:00,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:00,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 12:15:00,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
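The ListRSGroupInfos, GetRSGroupInfo, and GetRSGroupInfoOfTable requests logged above are the read side the test uses to check group membership between mutations. A sketch of the equivalent client-side reads, under the same RSGroupAdminClient assumption; group and table names are the ones from this log:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class InspectGroups {
  static void inspect(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // ListRSGroupInfos: every group with its servers and tables.
    for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
      System.out.println(info.getName() + " servers=" + info.getServers()
          + " tables=" + info.getTables());
    }
    // GetRSGroupInfoOfTable: which group owns 'testRename' after the MoveTables above.
    RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
    // GetRSGroupInfo: direct lookup by group name, e.g. the newly added 'normal'.
    RSGroupInfo normal = rsGroupAdmin.getRSGroupInfo("normal");
    System.out.println(ofTable.getName() + " / " + normal.getServers());
  }
}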
2023-07-18 12:15:00,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:00,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-18 12:15:00,745 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:00,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 117 2023-07-18 12:15:00,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 12:15:00,747 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 12:15:00,748 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 12:15:00,748 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:00,748 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:00,748 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:15:00,750 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:00,752 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:00,752 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 empty. 
2023-07-18 12:15:00,753 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:00,753 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-18 12:15:00,769 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:00,770 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 72b81988a89d5bc06336b9b0a03ce7c9, NAME => 'unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:15:00,782 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:00,782 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 72b81988a89d5bc06336b9b0a03ce7c9, disabling compactions & flushes 2023-07-18 12:15:00,782 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:00,782 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:00,782 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. after waiting 0 ms 2023-07-18 12:15:00,782 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:00,782 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:00,782 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 72b81988a89d5bc06336b9b0a03ce7c9: 2023-07-18 12:15:00,784 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:00,785 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682500785"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682500785"}]},"ts":"1689682500785"} 2023-07-18 12:15:00,787 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 12:15:00,787 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:00,788 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682500787"}]},"ts":"1689682500787"} 2023-07-18 12:15:00,789 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-18 12:15:00,791 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, ASSIGN}] 2023-07-18 12:15:00,793 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, ASSIGN 2023-07-18 12:15:00,794 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:15:00,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 12:15:00,945 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:00,946 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682500945"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682500945"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682500945"}]},"ts":"1689682500945"} 2023-07-18 12:15:00,947 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:01,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 12:15:01,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 
2023-07-18 12:15:01,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 72b81988a89d5bc06336b9b0a03ce7c9, NAME => 'unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:01,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:01,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,105 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,107 DEBUG [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/ut 2023-07-18 12:15:01,108 DEBUG [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/ut 2023-07-18 12:15:01,108 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 72b81988a89d5bc06336b9b0a03ce7c9 columnFamilyName ut 2023-07-18 12:15:01,110 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] regionserver.HStore(310): Store=72b81988a89d5bc06336b9b0a03ce7c9/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:01,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:01,125 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 72b81988a89d5bc06336b9b0a03ce7c9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11748431200, jitterRate=0.09415791928768158}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:01,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 72b81988a89d5bc06336b9b0a03ce7c9: 2023-07-18 12:15:01,126 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9., pid=119, masterSystemTime=1689682501099 2023-07-18 12:15:01,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:01,128 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 
2023-07-18 12:15:01,129 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:01,130 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682501129"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682501129"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682501129"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682501129"}]},"ts":"1689682501129"} 2023-07-18 12:15:01,133 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-18 12:15:01,133 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44601,1689682479947 in 184 msec 2023-07-18 12:15:01,136 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-18 12:15:01,136 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, ASSIGN in 342 msec 2023-07-18 12:15:01,137 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:01,137 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682501137"}]},"ts":"1689682501137"} 2023-07-18 12:15:01,138 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-18 12:15:01,144 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:01,145 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=unmovedTable in 406 msec 2023-07-18 12:15:01,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 12:15:01,350 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 117 completed 2023-07-18 12:15:01,350 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-18 12:15:01,350 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:01,353 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-18 12:15:01,353 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:01,354 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-18 12:15:01,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-18 12:15:01,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 12:15:01,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 12:15:01,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:01,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:01,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:15:01,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-18 12:15:01,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region 72b81988a89d5bc06336b9b0a03ce7c9 to RSGroup normal 2023-07-18 12:15:01,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, REOPEN/MOVE 2023-07-18 12:15:01,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-18 12:15:01,361 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, REOPEN/MOVE 2023-07-18 12:15:01,361 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:01,362 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682501361"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682501361"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682501361"}]},"ts":"1689682501361"} 2023-07-18 12:15:01,363 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:01,516 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 72b81988a89d5bc06336b9b0a03ce7c9, disabling compactions & flushes 2023-07-18 12:15:01,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 
2023-07-18 12:15:01,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:01,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. after waiting 0 ms 2023-07-18 12:15:01,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:01,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:01,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:01,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 72b81988a89d5bc06336b9b0a03ce7c9: 2023-07-18 12:15:01,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 72b81988a89d5bc06336b9b0a03ce7c9 move to jenkins-hbase4.apache.org,44567,1689682483625 record at close sequenceid=2 2023-07-18 12:15:01,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,524 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=CLOSED 2023-07-18 12:15:01,525 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682501524"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682501524"}]},"ts":"1689682501524"} 2023-07-18 12:15:01,527 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-18 12:15:01,527 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44601,1689682479947 in 163 msec 2023-07-18 12:15:01,528 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44567,1689682483625; forceNewPlan=false, retain=false 2023-07-18 12:15:01,679 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:01,679 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682501679"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682501679"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682501679"}]},"ts":"1689682501679"} 2023-07-18 12:15:01,681 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:15:01,844 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:01,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 72b81988a89d5bc06336b9b0a03ce7c9, NAME => 'unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:01,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:01,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,848 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,849 DEBUG [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/ut 2023-07-18 12:15:01,849 DEBUG [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/ut 2023-07-18 12:15:01,850 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
72b81988a89d5bc06336b9b0a03ce7c9 columnFamilyName ut 2023-07-18 12:15:01,851 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] regionserver.HStore(310): Store=72b81988a89d5bc06336b9b0a03ce7c9/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:01,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:01,860 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 72b81988a89d5bc06336b9b0a03ce7c9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11537849760, jitterRate=0.07454599440097809}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:01,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 72b81988a89d5bc06336b9b0a03ce7c9: 2023-07-18 12:15:01,861 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9., pid=122, masterSystemTime=1689682501833 2023-07-18 12:15:01,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:01,863 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 
2023-07-18 12:15:01,864 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:01,864 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682501864"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682501864"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682501864"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682501864"}]},"ts":"1689682501864"} 2023-07-18 12:15:01,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-18 12:15:01,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44567,1689682483625 in 186 msec 2023-07-18 12:15:01,870 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, REOPEN/MOVE in 509 msec 2023-07-18 12:15:01,961 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-18 12:15:02,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-18 12:15:02,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
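The MoveTables request logged above, and the REOPEN/MOVE procedure pid=120 it triggers, correspond to a single rsgroup admin call. A hedged sketch using the RSGroupAdminClient from this hbase-rsgroup module, assuming an already open Connection named conn (the variable names are illustrative):

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch only: ask the master to place 'unmovedTable' in rsgroup 'normal'.
    // The master then closes region 72b81988... on its current server and
    // reopens it on a member of the target group, which is what the
    // pid=120/121/122 entries above record.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("unmovedTable")), "normal");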
2023-07-18 12:15:02,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:02,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:02,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:02,369 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:02,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 12:15:02,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:02,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 12:15:02,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:02,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 12:15:02,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:02,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-18 12:15:02,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 12:15:02,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:02,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:02,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 12:15:02,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-18 12:15:02,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-18 12:15:02,385 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:02,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:02,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-18 12:15:02,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:02,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 12:15:02,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:02,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 12:15:02,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:02,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:02,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:02,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-18 12:15:02,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 12:15:02,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:02,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:02,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 12:15:02,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:15:02,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-18 12:15:02,403 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region 72b81988a89d5bc06336b9b0a03ce7c9 to RSGroup default 2023-07-18 12:15:02,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, REOPEN/MOVE 2023-07-18 12:15:02,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 12:15:02,404 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, REOPEN/MOVE 2023-07-18 12:15:02,405 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:02,405 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682502405"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682502405"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682502405"}]},"ts":"1689682502405"} 2023-07-18 12:15:02,406 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:15:02,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 72b81988a89d5bc06336b9b0a03ce7c9, disabling compactions & flushes 2023-07-18 12:15:02,561 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:02,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:02,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. after waiting 0 ms 2023-07-18 12:15:02,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:02,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:15:02,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 
2023-07-18 12:15:02,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 72b81988a89d5bc06336b9b0a03ce7c9: 2023-07-18 12:15:02,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 72b81988a89d5bc06336b9b0a03ce7c9 move to jenkins-hbase4.apache.org,44601,1689682479947 record at close sequenceid=5 2023-07-18 12:15:02,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,568 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=CLOSED 2023-07-18 12:15:02,568 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682502568"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682502568"}]},"ts":"1689682502568"} 2023-07-18 12:15:02,571 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-18 12:15:02,571 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44567,1689682483625 in 164 msec 2023-07-18 12:15:02,572 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:15:02,722 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:02,723 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682502722"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682502722"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682502722"}]},"ts":"1689682502722"} 2023-07-18 12:15:02,724 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:02,880 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 
2023-07-18 12:15:02,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 72b81988a89d5bc06336b9b0a03ce7c9, NAME => 'unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:02,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:02,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,882 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,883 DEBUG [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/ut 2023-07-18 12:15:02,883 DEBUG [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/ut 2023-07-18 12:15:02,884 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 72b81988a89d5bc06336b9b0a03ce7c9 columnFamilyName ut 2023-07-18 12:15:02,885 INFO [StoreOpener-72b81988a89d5bc06336b9b0a03ce7c9-1] regionserver.HStore(310): Store=72b81988a89d5bc06336b9b0a03ce7c9/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:02,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:02,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 72b81988a89d5bc06336b9b0a03ce7c9; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10773932160, jitterRate=0.0034006237983703613}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:02,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 72b81988a89d5bc06336b9b0a03ce7c9: 2023-07-18 12:15:02,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9., pid=125, masterSystemTime=1689682502876 2023-07-18 12:15:02,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:02,893 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:02,893 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=72b81988a89d5bc06336b9b0a03ce7c9, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:02,893 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689682502893"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682502893"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682502893"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682502893"}]},"ts":"1689682502893"} 2023-07-18 12:15:02,896 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-18 12:15:02,896 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 72b81988a89d5bc06336b9b0a03ce7c9, server=jenkins-hbase4.apache.org,44601,1689682479947 in 170 msec 2023-07-18 12:15:02,897 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=72b81988a89d5bc06336b9b0a03ce7c9, REOPEN/MOVE in 493 msec 2023-07-18 12:15:03,403 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 12:15:03,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-18 12:15:03,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
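Between the two MoveTables requests the log also records a RenameRSGroup call ('oldgroup' to 'newgroup', ZK GroupInfo count 9) and then the move of 'unmovedTable' back to 'default', which drives the second REOPEN/MOVE (pid=123) shown above. A rough client-side equivalent, again assuming an open Connection named conn; the renameRSGroup method name is inferred from the RenameRSGroup RPC in the log and should be treated as an assumption:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch only: rename the group, then send the table back to 'default'.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("unmovedTable")), "default");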
2023-07-18 12:15:03,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:03,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44567] to rsgroup default 2023-07-18 12:15:03,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 12:15:03,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:03,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:03,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 12:15:03,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:15:03,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-18 12:15:03,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,44567,1689682483625] are moved back to normal 2023-07-18 12:15:03,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-18 12:15:03,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:03,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-18 12:15:03,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:03,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:03,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 12:15:03,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 12:15:03,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:03,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:03,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
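The teardown entries above restore the cluster layout: the lone server left in 'normal' is moved back to 'default' and the now-empty group is removed (ZK GroupInfo count drops from 8 to 7). A sketch of those two calls, with the server address taken from the log and conn again assumed to be an open Connection:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch only: return jenkins-hbase4.apache.org:44567 to 'default',
    // then drop the empty 'normal' group.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 44567)),
        "default");
    rsGroupAdmin.removeRSGroup("normal");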
2023-07-18 12:15:03,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:03,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:03,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:03,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:03,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:03,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 12:15:03,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 12:15:03,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:03,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-18 12:15:03,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:03,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 12:15:03,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:03,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-18 12:15:03,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(345): Moving region a094d11666c446d7944327b133b4e60c to RSGroup default 2023-07-18 12:15:03,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, REOPEN/MOVE 2023-07-18 12:15:03,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 12:15:03,439 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, REOPEN/MOVE 2023-07-18 12:15:03,440 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:15:03,440 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682503440"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682503440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682503440"}]},"ts":"1689682503440"} 2023-07-18 12:15:03,442 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,35237,1689682479509}] 2023-07-18 12:15:03,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a094d11666c446d7944327b133b4e60c, disabling compactions & flushes 2023-07-18 12:15:03,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:03,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:03,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. after waiting 0 ms 2023-07-18 12:15:03,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:03,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 12:15:03,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 
2023-07-18 12:15:03,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a094d11666c446d7944327b133b4e60c: 2023-07-18 12:15:03,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a094d11666c446d7944327b133b4e60c move to jenkins-hbase4.apache.org,44567,1689682483625 record at close sequenceid=5 2023-07-18 12:15:03,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,607 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=CLOSED 2023-07-18 12:15:03,607 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682503607"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682503607"}]},"ts":"1689682503607"} 2023-07-18 12:15:03,611 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-18 12:15:03,611 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,35237,1689682479509 in 167 msec 2023-07-18 12:15:03,611 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44567,1689682483625; forceNewPlan=false, retain=false 2023-07-18 12:15:03,762 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 12:15:03,762 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:03,762 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682503762"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682503762"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682503762"}]},"ts":"1689682503762"} 2023-07-18 12:15:03,764 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:15:03,920 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 
2023-07-18 12:15:03,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a094d11666c446d7944327b133b4e60c, NAME => 'testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:03,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:03,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,921 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,922 DEBUG [StoreOpener-a094d11666c446d7944327b133b4e60c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/tr 2023-07-18 12:15:03,922 DEBUG [StoreOpener-a094d11666c446d7944327b133b4e60c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/tr 2023-07-18 12:15:03,923 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a094d11666c446d7944327b133b4e60c columnFamilyName tr 2023-07-18 12:15:03,923 INFO [StoreOpener-a094d11666c446d7944327b133b4e60c-1] regionserver.HStore(310): Store=a094d11666c446d7944327b133b4e60c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:03,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:03,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a094d11666c446d7944327b133b4e60c; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10415184000, jitterRate=-0.0300104022026062}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:03,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a094d11666c446d7944327b133b4e60c: 2023-07-18 12:15:03,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689682499076.a094d11666c446d7944327b133b4e60c., pid=128, masterSystemTime=1689682503915 2023-07-18 12:15:03,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:03,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:03,932 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=a094d11666c446d7944327b133b4e60c, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:03,932 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689682503932"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682503932"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682503932"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682503932"}]},"ts":"1689682503932"} 2023-07-18 12:15:03,934 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-18 12:15:03,935 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure a094d11666c446d7944327b133b4e60c, server=jenkins-hbase4.apache.org,44567,1689682483625 in 169 msec 2023-07-18 12:15:03,936 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=a094d11666c446d7944327b133b4e60c, REOPEN/MOVE in 497 msec 2023-07-18 12:15:04,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-18 12:15:04,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-18 12:15:04,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:04,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:35237] to rsgroup default 2023-07-18 12:15:04,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 12:15:04,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:04,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-18 12:15:04,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721] are moved back to newgroup 2023-07-18 12:15:04,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-18 12:15:04,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:04,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-18 12:15:04,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:04,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:04,459 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:04,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:04,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:04,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:04,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:04,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:15:04,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:04,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 763 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683704469, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:15:04,469 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 12:15:04,471 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:04,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,472 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:04,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:04,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:04,491 INFO [Listener at localhost/37687] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=513 (was 518), OpenFileDescriptor=811 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 438) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=2458 (was 2676) 2023-07-18 12:15:04,491 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-18 12:15:04,510 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=513, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=176, AvailableMemoryMB=2458 2023-07-18 12:15:04,510 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-18 12:15:04,511 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-18 12:15:04,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:04,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:15:04,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:04,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:04,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:04,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:04,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:04,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:04,527 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:04,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:04,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:04,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:04,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:04,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:15:04,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:04,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 791 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683704540, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:15:04,541 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:04,542 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:04,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,543 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:04,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:04,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:04,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-18 12:15:04,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:04,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-18 12:15:04,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-18 12:15:04,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-18 12:15:04,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:04,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-18 12:15:04,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:04,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:51504 deadline: 1689683704552, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-18 12:15:04,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-18 12:15:04,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:04,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 806 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:51504 deadline: 1689683704554, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 12:15:04,556 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-18 12:15:04,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-18 12:15:04,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-18 12:15:04,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:04,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 810 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:51504 deadline: 1689683704560, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 12:15:04,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:04,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:15:04,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:04,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:04,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:04,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:04,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:04,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:04,576 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:04,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:04,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:04,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:04,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:04,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:15:04,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:04,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 834 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683704587, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:15:04,591 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:04,593 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:04,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,594 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:04,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:04,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:04,614 INFO [Listener at localhost/37687] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=517 (was 513) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa4b4f0d-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=811 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 467), ProcessCount=176 (was 176), AvailableMemoryMB=2457 (was 2458) 2023-07-18 12:15:04,614 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-18 12:15:04,634 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=517, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=176, AvailableMemoryMB=2457 2023-07-18 12:15:04,634 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-18 12:15:04,634 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-18 12:15:04,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:04,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:15:04,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:04,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:04,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:04,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:04,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:04,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:04,651 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:04,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:04,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:04,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:04,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:04,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:15:04,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:04,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 862 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683704674, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:15:04,674 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:04,676 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:04,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,677 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:04,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:04,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:04,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:04,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:04,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1213895215 2023-07-18 12:15:04,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1213895215 2023-07-18 
12:15:04,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:04,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:15:04,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:04,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:35237] to rsgroup Group_testDisabledTableMove_1213895215 2023-07-18 12:15:04,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1213895215 2023-07-18 12:15:04,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:04,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:15:04,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 12:15:04,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721] are moved back to default 2023-07-18 12:15:04,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1213895215 2023-07-18 12:15:04,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:04,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:04,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:04,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1213895215 2023-07-18 12:15:04,700 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:04,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:04,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-18 12:15:04,705 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:04,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 129 2023-07-18 12:15:04,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-18 12:15:04,707 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:04,707 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1213895215 2023-07-18 12:15:04,707 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:04,708 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:15:04,709 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:04,713 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:04,713 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:04,713 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:04,713 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868 2023-07-18 12:15:04,713 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:04,714 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d empty. 2023-07-18 12:15:04,714 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b empty. 2023-07-18 12:15:04,714 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946 empty. 2023-07-18 12:15:04,714 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0 empty. 2023-07-18 12:15:04,714 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868 empty. 2023-07-18 12:15:04,715 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:04,716 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:04,717 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868 2023-07-18 12:15:04,717 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:04,717 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:04,717 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 12:15:04,731 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:04,733 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => e06324b926346992a4f4b330fd782e9d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:15:04,733 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 97f6fbd91a4455f841e5d4284bd020a0, NAME => 'Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:15:04,733 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1fba925690d6ed72f87f2e3a3535c946, NAME => 'Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:15:04,759 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:04,759 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing e06324b926346992a4f4b330fd782e9d, disabling compactions & flushes 2023-07-18 12:15:04,759 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:04,760 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:04,760 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. after waiting 0 ms 2023-07-18 12:15:04,760 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 
2023-07-18 12:15:04,760 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:04,760 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for e06324b926346992a4f4b330fd782e9d: 2023-07-18 12:15:04,760 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 5022547219127c4fb176777611449868, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:15:04,761 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:04,761 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 1fba925690d6ed72f87f2e3a3535c946, disabling compactions & flushes 2023-07-18 12:15:04,761 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 2023-07-18 12:15:04,761 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 2023-07-18 12:15:04,761 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. after waiting 0 ms 2023-07-18 12:15:04,761 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 2023-07-18 12:15:04,761 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 
2023-07-18 12:15:04,761 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 1fba925690d6ed72f87f2e3a3535c946: 2023-07-18 12:15:04,762 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => efb3c52d2d9300bee3cf75ecafb7a20b, NAME => 'Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp 2023-07-18 12:15:04,762 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:04,762 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 97f6fbd91a4455f841e5d4284bd020a0, disabling compactions & flushes 2023-07-18 12:15:04,762 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:04,762 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:04,762 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. after waiting 0 ms 2023-07-18 12:15:04,763 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:04,763 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:04,763 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 97f6fbd91a4455f841e5d4284bd020a0: 2023-07-18 12:15:04,775 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:04,775 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing efb3c52d2d9300bee3cf75ecafb7a20b, disabling compactions & flushes 2023-07-18 12:15:04,775 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 
2023-07-18 12:15:04,775 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 2023-07-18 12:15:04,775 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. after waiting 0 ms 2023-07-18 12:15:04,775 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 2023-07-18 12:15:04,775 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 2023-07-18 12:15:04,775 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for efb3c52d2d9300bee3cf75ecafb7a20b: 2023-07-18 12:15:04,776 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:04,776 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 5022547219127c4fb176777611449868, disabling compactions & flushes 2023-07-18 12:15:04,776 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 2023-07-18 12:15:04,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 2023-07-18 12:15:04,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. after waiting 0 ms 2023-07-18 12:15:04,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 2023-07-18 12:15:04,777 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 
2023-07-18 12:15:04,777 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 5022547219127c4fb176777611449868: 2023-07-18 12:15:04,779 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:04,780 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682504780"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682504780"}]},"ts":"1689682504780"} 2023-07-18 12:15:04,780 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682504780"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682504780"}]},"ts":"1689682504780"} 2023-07-18 12:15:04,780 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682504780"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682504780"}]},"ts":"1689682504780"} 2023-07-18 12:15:04,780 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682504780"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682504780"}]},"ts":"1689682504780"} 2023-07-18 12:15:04,780 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689682504701.5022547219127c4fb176777611449868.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682504780"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682504780"}]},"ts":"1689682504780"} 2023-07-18 12:15:04,782 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-18 12:15:04,783 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:04,783 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682504783"}]},"ts":"1689682504783"} 2023-07-18 12:15:04,784 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-18 12:15:04,788 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:04,788 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:04,788 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:04,788 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:04,789 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1fba925690d6ed72f87f2e3a3535c946, ASSIGN}, {pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=97f6fbd91a4455f841e5d4284bd020a0, ASSIGN}, {pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e06324b926346992a4f4b330fd782e9d, ASSIGN}, {pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5022547219127c4fb176777611449868, ASSIGN}, {pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=efb3c52d2d9300bee3cf75ecafb7a20b, ASSIGN}] 2023-07-18 12:15:04,791 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=97f6fbd91a4455f841e5d4284bd020a0, ASSIGN 2023-07-18 12:15:04,791 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e06324b926346992a4f4b330fd782e9d, ASSIGN 2023-07-18 12:15:04,791 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1fba925690d6ed72f87f2e3a3535c946, ASSIGN 2023-07-18 12:15:04,791 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5022547219127c4fb176777611449868, ASSIGN 2023-07-18 12:15:04,792 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1fba925690d6ed72f87f2e3a3535c946, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44567,1689682483625; forceNewPlan=false, retain=false 2023-07-18 12:15:04,792 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e06324b926346992a4f4b330fd782e9d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:15:04,792 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=efb3c52d2d9300bee3cf75ecafb7a20b, ASSIGN 2023-07-18 12:15:04,792 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5022547219127c4fb176777611449868, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:15:04,792 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=97f6fbd91a4455f841e5d4284bd020a0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44601,1689682479947; forceNewPlan=false, retain=false 2023-07-18 12:15:04,793 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=efb3c52d2d9300bee3cf75ecafb7a20b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44567,1689682483625; forceNewPlan=false, retain=false 2023-07-18 12:15:04,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-18 12:15:04,942 INFO [jenkins-hbase4:36151] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 12:15:04,947 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=e06324b926346992a4f4b330fd782e9d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:04,947 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=efb3c52d2d9300bee3cf75ecafb7a20b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:04,947 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=1fba925690d6ed72f87f2e3a3535c946, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:04,947 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682504947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682504947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682504947"}]},"ts":"1689682504947"} 2023-07-18 12:15:04,947 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682504947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682504947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682504947"}]},"ts":"1689682504947"} 2023-07-18 12:15:04,947 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=5022547219127c4fb176777611449868, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:04,948 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689682504701.5022547219127c4fb176777611449868.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682504947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682504947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682504947"}]},"ts":"1689682504947"} 2023-07-18 12:15:04,947 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=97f6fbd91a4455f841e5d4284bd020a0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:04,949 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682504947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682504947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682504947"}]},"ts":"1689682504947"} 2023-07-18 12:15:04,947 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682504947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682504947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682504947"}]},"ts":"1689682504947"} 2023-07-18 12:15:04,950 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=134, state=RUNNABLE; OpenRegionProcedure efb3c52d2d9300bee3cf75ecafb7a20b, 
server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:15:04,951 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=130, state=RUNNABLE; OpenRegionProcedure 1fba925690d6ed72f87f2e3a3535c946, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:15:04,952 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=133, state=RUNNABLE; OpenRegionProcedure 5022547219127c4fb176777611449868, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:04,952 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=131, state=RUNNABLE; OpenRegionProcedure 97f6fbd91a4455f841e5d4284bd020a0, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:04,953 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=132, state=RUNNABLE; OpenRegionProcedure e06324b926346992a4f4b330fd782e9d, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:05,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-18 12:15:05,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:05,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e06324b926346992a4f4b330fd782e9d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 12:15:05,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:05,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 
2023-07-18 12:15:05,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => efb3c52d2d9300bee3cf75ecafb7a20b, NAME => 'Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 12:15:05,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:05,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,116 INFO [StoreOpener-efb3c52d2d9300bee3cf75ecafb7a20b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,117 INFO [StoreOpener-e06324b926346992a4f4b330fd782e9d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,118 DEBUG [StoreOpener-efb3c52d2d9300bee3cf75ecafb7a20b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b/f 2023-07-18 12:15:05,118 DEBUG [StoreOpener-efb3c52d2d9300bee3cf75ecafb7a20b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b/f 2023-07-18 12:15:05,118 DEBUG [StoreOpener-e06324b926346992a4f4b330fd782e9d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d/f 2023-07-18 12:15:05,118 DEBUG [StoreOpener-e06324b926346992a4f4b330fd782e9d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d/f 2023-07-18 12:15:05,119 INFO [StoreOpener-efb3c52d2d9300bee3cf75ecafb7a20b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 
6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region efb3c52d2d9300bee3cf75ecafb7a20b columnFamilyName f 2023-07-18 12:15:05,119 INFO [StoreOpener-e06324b926346992a4f4b330fd782e9d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e06324b926346992a4f4b330fd782e9d columnFamilyName f 2023-07-18 12:15:05,119 INFO [StoreOpener-efb3c52d2d9300bee3cf75ecafb7a20b-1] regionserver.HStore(310): Store=efb3c52d2d9300bee3cf75ecafb7a20b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:05,119 INFO [StoreOpener-e06324b926346992a4f4b330fd782e9d-1] regionserver.HStore(310): Store=e06324b926346992a4f4b330fd782e9d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:05,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:05,128 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:05,129 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e06324b926346992a4f4b330fd782e9d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11270244960, jitterRate=0.04962335526943207}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:05,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e06324b926346992a4f4b330fd782e9d: 2023-07-18 12:15:05,129 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened efb3c52d2d9300bee3cf75ecafb7a20b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9685725600, jitterRate=-0.09794650971889496}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:05,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for efb3c52d2d9300bee3cf75ecafb7a20b: 2023-07-18 12:15:05,129 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b., pid=135, masterSystemTime=1689682505109 2023-07-18 12:15:05,129 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d., pid=139, masterSystemTime=1689682505110 2023-07-18 12:15:05,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 2023-07-18 12:15:05,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 2023-07-18 12:15:05,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 
2023-07-18 12:15:05,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1fba925690d6ed72f87f2e3a3535c946, NAME => 'Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 12:15:05,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:05,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,132 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=efb3c52d2d9300bee3cf75ecafb7a20b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:05,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:05,133 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682505132"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682505132"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682505132"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682505132"}]},"ts":"1689682505132"} 2023-07-18 12:15:05,133 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:05,133 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 
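The "Opening region: {ENCODED => ..., STARTKEY => ..., ENDKEY => ...}" records above and below describe the five pre-split regions of Group_testDisabledTableMove coming online. As a minimal client-side sketch of how those same region boundaries can be inspected once the table exists, assuming only the standard HBase 2.4 Admin API (the class name and connection setup here are illustrative, not the test's code):

```java
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class ListTableRegionsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Each RegionInfo corresponds to one "Opening region" record in the log:
      // encoded name plus start/end key of that region.
      List<RegionInfo> regions =
          admin.getRegions(TableName.valueOf("Group_testDisabledTableMove"));
      for (RegionInfo r : regions) {
        System.out.println(r.getEncodedName()
            + " [" + Bytes.toStringBinary(r.getStartKey())
            + ", " + Bytes.toStringBinary(r.getEndKey()) + ")");
      }
    }
  }
}
```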
2023-07-18 12:15:05,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 97f6fbd91a4455f841e5d4284bd020a0, NAME => 'Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 12:15:05,133 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=e06324b926346992a4f4b330fd782e9d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:05,133 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505133"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682505133"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682505133"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682505133"}]},"ts":"1689682505133"} 2023-07-18 12:15:05,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:05,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,134 INFO [StoreOpener-1fba925690d6ed72f87f2e3a3535c946-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,135 INFO [StoreOpener-97f6fbd91a4455f841e5d4284bd020a0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,137 DEBUG [StoreOpener-1fba925690d6ed72f87f2e3a3535c946-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946/f 2023-07-18 12:15:05,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=134 2023-07-18 12:15:05,137 DEBUG [StoreOpener-1fba925690d6ed72f87f2e3a3535c946-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946/f 2023-07-18 12:15:05,137 DEBUG [StoreOpener-97f6fbd91a4455f841e5d4284bd020a0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0/f 2023-07-18 12:15:05,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=134, state=SUCCESS; OpenRegionProcedure efb3c52d2d9300bee3cf75ecafb7a20b, server=jenkins-hbase4.apache.org,44567,1689682483625 in 185 msec 2023-07-18 12:15:05,137 DEBUG [StoreOpener-97f6fbd91a4455f841e5d4284bd020a0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0/f 2023-07-18 12:15:05,138 INFO [StoreOpener-97f6fbd91a4455f841e5d4284bd020a0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 97f6fbd91a4455f841e5d4284bd020a0 columnFamilyName f 2023-07-18 12:15:05,138 INFO [StoreOpener-97f6fbd91a4455f841e5d4284bd020a0-1] regionserver.HStore(310): Store=97f6fbd91a4455f841e5d4284bd020a0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:05,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=132 2023-07-18 12:15:05,138 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=efb3c52d2d9300bee3cf75ecafb7a20b, ASSIGN in 349 msec 2023-07-18 12:15:05,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=132, state=SUCCESS; OpenRegionProcedure e06324b926346992a4f4b330fd782e9d, server=jenkins-hbase4.apache.org,44601,1689682479947 in 183 msec 2023-07-18 12:15:05,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,139 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e06324b926346992a4f4b330fd782e9d, ASSIGN in 350 msec 2023-07-18 12:15:05,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,142 INFO [StoreOpener-1fba925690d6ed72f87f2e3a3535c946-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1fba925690d6ed72f87f2e3a3535c946 columnFamilyName f 2023-07-18 12:15:05,143 INFO [StoreOpener-1fba925690d6ed72f87f2e3a3535c946-1] regionserver.HStore(310): Store=1fba925690d6ed72f87f2e3a3535c946/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:05,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:05,146 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 97f6fbd91a4455f841e5d4284bd020a0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9698560320, jitterRate=-0.09675118327140808}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:05,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 97f6fbd91a4455f841e5d4284bd020a0: 2023-07-18 12:15:05,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0., pid=138, masterSystemTime=1689682505110 2023-07-18 12:15:05,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:05,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:05,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 
2023-07-18 12:15:05,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5022547219127c4fb176777611449868, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 12:15:05,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 5022547219127c4fb176777611449868 2023-07-18 12:15:05,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:05,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5022547219127c4fb176777611449868 2023-07-18 12:15:05,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5022547219127c4fb176777611449868 2023-07-18 12:15:05,150 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=97f6fbd91a4455f841e5d4284bd020a0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:05,150 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505150"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682505150"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682505150"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682505150"}]},"ts":"1689682505150"} 2023-07-18 12:15:05,159 INFO [StoreOpener-5022547219127c4fb176777611449868-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5022547219127c4fb176777611449868 2023-07-18 12:15:05,159 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=131 2023-07-18 12:15:05,159 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=131, state=SUCCESS; OpenRegionProcedure 97f6fbd91a4455f841e5d4284bd020a0, server=jenkins-hbase4.apache.org,44601,1689682479947 in 203 msec 2023-07-18 12:15:05,160 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=97f6fbd91a4455f841e5d4284bd020a0, ASSIGN in 371 msec 2023-07-18 12:15:05,161 DEBUG [StoreOpener-5022547219127c4fb176777611449868-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868/f 2023-07-18 12:15:05,161 DEBUG [StoreOpener-5022547219127c4fb176777611449868-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868/f 
2023-07-18 12:15:05,161 INFO [StoreOpener-5022547219127c4fb176777611449868-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5022547219127c4fb176777611449868 columnFamilyName f 2023-07-18 12:15:05,163 INFO [StoreOpener-5022547219127c4fb176777611449868-1] regionserver.HStore(310): Store=5022547219127c4fb176777611449868/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:05,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:05,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868 2023-07-18 12:15:05,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868 2023-07-18 12:15:05,166 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1fba925690d6ed72f87f2e3a3535c946; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10270611360, jitterRate=-0.04347477853298187}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:05,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1fba925690d6ed72f87f2e3a3535c946: 2023-07-18 12:15:05,167 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946., pid=136, masterSystemTime=1689682505109 2023-07-18 12:15:05,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5022547219127c4fb176777611449868 2023-07-18 12:15:05,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 2023-07-18 12:15:05,169 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 
2023-07-18 12:15:05,170 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=1fba925690d6ed72f87f2e3a3535c946, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:05,170 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682505170"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682505170"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682505170"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682505170"}]},"ts":"1689682505170"} 2023-07-18 12:15:05,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:05,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5022547219127c4fb176777611449868; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11661114080, jitterRate=0.08602587878704071}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:05,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5022547219127c4fb176777611449868: 2023-07-18 12:15:05,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868., pid=137, masterSystemTime=1689682505110 2023-07-18 12:15:05,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 2023-07-18 12:15:05,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 
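The region opens above are the ASSIGN subprocedures of CreateTableProcedure pid=129, which finishes a few records below before the client waits for assignment ("Waiting until all regions of table Group_testDisabledTableMove get assigned"). A minimal sketch of the client-side calls that produce this sequence, assuming the stock HBase 2.4 HBaseTestingUtility and Admin APIs; the class name, mini-cluster sizing, and the exact split keys are illustrative (the keys are copied from the boundaries visible in the log), not the test's actual code:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(3);
    try (Admin admin = util.getAdmin()) {
      TableName table = TableName.valueOf("Group_testDisabledTableMove");
      // Split points mirroring the five regions in the log:
      // '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'.
      byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
        Bytes.toBytes("zzzzz")
      };
      // Single column family 'f', as in the HStore records above.
      admin.createTable(
          TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splits);
      // Blocks until every region is assigned, matching the
      // HBaseTestingUtility(3430)/(3504) messages further down.
      util.waitUntilAllRegionsAssigned(table);
    } finally {
      util.shutdownMiniCluster();
    }
  }
}
```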
2023-07-18 12:15:05,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=130 2023-07-18 12:15:05,176 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=5022547219127c4fb176777611449868, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:05,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=130, state=SUCCESS; OpenRegionProcedure 1fba925690d6ed72f87f2e3a3535c946, server=jenkins-hbase4.apache.org,44567,1689682483625 in 221 msec 2023-07-18 12:15:05,176 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689682504701.5022547219127c4fb176777611449868.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505176"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682505176"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682505176"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682505176"}]},"ts":"1689682505176"} 2023-07-18 12:15:05,178 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1fba925690d6ed72f87f2e3a3535c946, ASSIGN in 388 msec 2023-07-18 12:15:05,180 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=133 2023-07-18 12:15:05,180 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=133, state=SUCCESS; OpenRegionProcedure 5022547219127c4fb176777611449868, server=jenkins-hbase4.apache.org,44601,1689682479947 in 226 msec 2023-07-18 12:15:05,182 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=129 2023-07-18 12:15:05,182 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5022547219127c4fb176777611449868, ASSIGN in 392 msec 2023-07-18 12:15:05,182 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:05,183 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682505183"}]},"ts":"1689682505183"} 2023-07-18 12:15:05,184 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-18 12:15:05,187 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:05,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 485 msec 2023-07-18 12:15:05,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-18 12:15:05,309 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 
129 completed 2023-07-18 12:15:05,309 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-18 12:15:05,310 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:05,314 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-18 12:15:05,314 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:05,314 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-18 12:15:05,314 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:05,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 12:15:05,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:05,324 INFO [Listener at localhost/37687] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 12:15:05,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 12:15:05,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=140, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-18 12:15:05,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-18 12:15:05,337 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682505337"}]},"ts":"1689682505337"} 2023-07-18 12:15:05,338 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-18 12:15:05,340 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-18 12:15:05,341 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1fba925690d6ed72f87f2e3a3535c946, UNASSIGN}, {pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=97f6fbd91a4455f841e5d4284bd020a0, UNASSIGN}, {pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e06324b926346992a4f4b330fd782e9d, UNASSIGN}, {pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5022547219127c4fb176777611449868, UNASSIGN}, {pid=145, ppid=140, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=efb3c52d2d9300bee3cf75ecafb7a20b, UNASSIGN}] 2023-07-18 12:15:05,343 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5022547219127c4fb176777611449868, UNASSIGN 2023-07-18 12:15:05,343 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1fba925690d6ed72f87f2e3a3535c946, UNASSIGN 2023-07-18 12:15:05,343 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=97f6fbd91a4455f841e5d4284bd020a0, UNASSIGN 2023-07-18 12:15:05,343 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e06324b926346992a4f4b330fd782e9d, UNASSIGN 2023-07-18 12:15:05,344 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=5022547219127c4fb176777611449868, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:05,344 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=efb3c52d2d9300bee3cf75ecafb7a20b, UNASSIGN 2023-07-18 12:15:05,344 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=1fba925690d6ed72f87f2e3a3535c946, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:05,344 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689682504701.5022547219127c4fb176777611449868.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505344"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682505344"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682505344"}]},"ts":"1689682505344"} 2023-07-18 12:15:05,344 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=97f6fbd91a4455f841e5d4284bd020a0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:05,344 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505344"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682505344"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682505344"}]},"ts":"1689682505344"} 2023-07-18 12:15:05,345 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=e06324b926346992a4f4b330fd782e9d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:05,345 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505345"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682505345"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682505345"}]},"ts":"1689682505345"} 2023-07-18 12:15:05,344 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682505344"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682505344"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682505344"}]},"ts":"1689682505344"} 2023-07-18 12:15:05,346 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=efb3c52d2d9300bee3cf75ecafb7a20b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:05,346 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682505346"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682505346"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682505346"}]},"ts":"1689682505346"} 2023-07-18 12:15:05,346 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=144, state=RUNNABLE; CloseRegionProcedure 5022547219127c4fb176777611449868, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:05,347 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=142, state=RUNNABLE; CloseRegionProcedure 97f6fbd91a4455f841e5d4284bd020a0, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:05,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=143, state=RUNNABLE; CloseRegionProcedure e06324b926346992a4f4b330fd782e9d, server=jenkins-hbase4.apache.org,44601,1689682479947}] 2023-07-18 12:15:05,350 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=141, state=RUNNABLE; CloseRegionProcedure 1fba925690d6ed72f87f2e3a3535c946, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:15:05,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure efb3c52d2d9300bee3cf75ecafb7a20b, server=jenkins-hbase4.apache.org,44567,1689682483625}] 2023-07-18 12:15:05,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-18 12:15:05,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 97f6fbd91a4455f841e5d4284bd020a0, disabling compactions & flushes 2023-07-18 12:15:05,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 
2023-07-18 12:15:05,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:05,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. after waiting 0 ms 2023-07-18 12:15:05,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:05,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:05,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0. 2023-07-18 12:15:05,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 97f6fbd91a4455f841e5d4284bd020a0: 2023-07-18 12:15:05,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e06324b926346992a4f4b330fd782e9d, disabling compactions & flushes 2023-07-18 12:15:05,512 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:05,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:05,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. after waiting 0 ms 2023-07-18 12:15:05,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 
2023-07-18 12:15:05,513 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=97f6fbd91a4455f841e5d4284bd020a0, regionState=CLOSED 2023-07-18 12:15:05,513 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505513"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682505513"}]},"ts":"1689682505513"} 2023-07-18 12:15:05,515 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1fba925690d6ed72f87f2e3a3535c946, disabling compactions & flushes 2023-07-18 12:15:05,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 2023-07-18 12:15:05,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 2023-07-18 12:15:05,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. after waiting 0 ms 2023-07-18 12:15:05,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 2023-07-18 12:15:05,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:05,520 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=142 2023-07-18 12:15:05,520 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=142, state=SUCCESS; CloseRegionProcedure 97f6fbd91a4455f841e5d4284bd020a0, server=jenkins-hbase4.apache.org,44601,1689682479947 in 168 msec 2023-07-18 12:15:05,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d. 2023-07-18 12:15:05,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e06324b926346992a4f4b330fd782e9d: 2023-07-18 12:15:05,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5022547219127c4fb176777611449868 2023-07-18 12:15:05,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5022547219127c4fb176777611449868, disabling compactions & flushes 2023-07-18 12:15:05,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 
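The UNASSIGN TransitRegionStateProcedures (pids 141-145) and CloseRegionProcedures (pids 146-150) in the records above are all driven by a single client call. A minimal sketch of that call, assuming the standard HBase 2.4 Admin API; the connection setup and class name are illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Client side of "HMaster$11(2418): ... disable Group_testDisabledTableMove":
      // the master runs DisableTableProcedure, which unassigns and closes each region.
      admin.disableTable(table);
      // Once the procedure completes, hbase:meta records state=DISABLED for the table.
      System.out.println("disabled? " + admin.isTableDisabled(table));
    }
  }
}
```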
2023-07-18 12:15:05,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 2023-07-18 12:15:05,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. after waiting 0 ms 2023-07-18 12:15:05,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 2023-07-18 12:15:05,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:05,526 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=97f6fbd91a4455f841e5d4284bd020a0, UNASSIGN in 179 msec 2023-07-18 12:15:05,526 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=e06324b926346992a4f4b330fd782e9d, regionState=CLOSED 2023-07-18 12:15:05,526 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505526"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682505526"}]},"ts":"1689682505526"} 2023-07-18 12:15:05,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946. 2023-07-18 12:15:05,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1fba925690d6ed72f87f2e3a3535c946: 2023-07-18 12:15:05,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing efb3c52d2d9300bee3cf75ecafb7a20b, disabling compactions & flushes 2023-07-18 12:15:05,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 2023-07-18 12:15:05,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 2023-07-18 12:15:05,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 
after waiting 0 ms 2023-07-18 12:15:05,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 2023-07-18 12:15:05,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:05,543 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=1fba925690d6ed72f87f2e3a3535c946, regionState=CLOSED 2023-07-18 12:15:05,543 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682505543"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682505543"}]},"ts":"1689682505543"} 2023-07-18 12:15:05,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868. 2023-07-18 12:15:05,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5022547219127c4fb176777611449868: 2023-07-18 12:15:05,544 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-18 12:15:05,544 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; CloseRegionProcedure e06324b926346992a4f4b330fd782e9d, server=jenkins-hbase4.apache.org,44601,1689682479947 in 180 msec 2023-07-18 12:15:05,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:05,545 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5022547219127c4fb176777611449868 2023-07-18 12:15:05,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b. 
2023-07-18 12:15:05,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for efb3c52d2d9300bee3cf75ecafb7a20b: 2023-07-18 12:15:05,546 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e06324b926346992a4f4b330fd782e9d, UNASSIGN in 203 msec 2023-07-18 12:15:05,546 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=5022547219127c4fb176777611449868, regionState=CLOSED 2023-07-18 12:15:05,546 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689682504701.5022547219127c4fb176777611449868.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689682505546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682505546"}]},"ts":"1689682505546"} 2023-07-18 12:15:05,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,549 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=141 2023-07-18 12:15:05,549 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=efb3c52d2d9300bee3cf75ecafb7a20b, regionState=CLOSED 2023-07-18 12:15:05,549 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=141, state=SUCCESS; CloseRegionProcedure 1fba925690d6ed72f87f2e3a3535c946, server=jenkins-hbase4.apache.org,44567,1689682483625 in 195 msec 2023-07-18 12:15:05,549 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689682505549"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682505549"}]},"ts":"1689682505549"} 2023-07-18 12:15:05,551 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1fba925690d6ed72f87f2e3a3535c946, UNASSIGN in 208 msec 2023-07-18 12:15:05,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=144 2023-07-18 12:15:05,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=144, state=SUCCESS; CloseRegionProcedure 5022547219127c4fb176777611449868, server=jenkins-hbase4.apache.org,44601,1689682479947 in 202 msec 2023-07-18 12:15:05,554 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-18 12:15:05,554 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure efb3c52d2d9300bee3cf75ecafb7a20b, server=jenkins-hbase4.apache.org,44567,1689682483625 in 195 msec 2023-07-18 12:15:05,554 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5022547219127c4fb176777611449868, UNASSIGN in 212 msec 2023-07-18 12:15:05,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-18 12:15:05,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=efb3c52d2d9300bee3cf75ecafb7a20b, UNASSIGN in 213 msec 2023-07-18 12:15:05,556 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682505556"}]},"ts":"1689682505556"} 2023-07-18 12:15:05,558 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-18 12:15:05,560 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-18 12:15:05,562 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 236 msec 2023-07-18 12:15:05,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-18 12:15:05,639 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 140 completed 2023-07-18 12:15:05,639 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1213895215 2023-07-18 12:15:05,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1213895215 2023-07-18 12:15:05,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:05,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1213895215 2023-07-18 12:15:05,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:05,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:15:05,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-18 12:15:05,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1213895215, current retry=0 2023-07-18 12:15:05,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1213895215. 
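The records above show the test disabling Group_testDisabledTableMove (DisableTableProcedure pid=140) and then asking the master's RSGroupAdminEndpoint to move the table into Group_testDisabledTableMove_1213895215; because the table is disabled, RSGroupAdminServer only rewrites the /hbase/rsgroup znodes and skips region movement ("Moving 0 region(s)"). A minimal client-side sketch of that sequence follows. It is not the test's actual code: the Connection named conn is assumed to exist, and the target rsgroup is assumed to have been created earlier in the test.

    import java.util.Collections;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveDisabledTableSketch {
      // Disable the table, then move it into the target rsgroup.
      static void moveDisabledTable(Connection conn) throws Exception {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        String group = "Group_testDisabledTableMove_1213895215"; // assumed to already exist

        try (Admin admin = conn.getAdmin()) {
          admin.disableTable(table); // corresponds to DisableTableProcedure pid=140 above
        }

        // For a disabled table only group metadata changes; no regions are reassigned.
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(Collections.singleton(table), group);
      }
    }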
2023-07-18 12:15:05,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:05,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:05,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:05,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 12:15:05,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:05,664 INFO [Listener at localhost/37687] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 12:15:05,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 12:15:05,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:05,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:51504 deadline: 1689682565664, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-18 12:15:05,665 DEBUG [Listener at localhost/37687] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
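The second disable attempt above is rejected with TableNotEnabledException because the table is already disabled; HBaseTestingUtility notices this ("already disabled, so just deleting it") and proceeds straight to the delete. A defensive version of that teardown step could look like the sketch below; the helper name is illustrative and not the utility's real implementation.

    import java.io.IOException;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;

    public final class DisableAndDeleteSketch {
      // Disable the table only if it is still enabled, then delete it.
      static void disableAndDelete(Admin admin, TableName table) throws IOException {
        try {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);
          }
        } catch (TableNotEnabledException e) {
          // Raced with another disable; the table is already in the state we want.
        }
        admin.deleteTable(table);
      }
    }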
2023-07-18 12:15:05,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-18 12:15:05,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] procedure2.ProcedureExecutor(1029): Stored pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 12:15:05,669 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 12:15:05,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1213895215' 2023-07-18 12:15:05,670 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=152, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 12:15:05,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:05,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1213895215 2023-07-18 12:15:05,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:05,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:15:05,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-18 12:15:05,677 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,677 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,677 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868 2023-07-18 12:15:05,677 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,677 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,679 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b/f, FileablePath, 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b/recovered.edits] 2023-07-18 12:15:05,679 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0/recovered.edits] 2023-07-18 12:15:05,680 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946/recovered.edits] 2023-07-18 12:15:05,679 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868/recovered.edits] 2023-07-18 12:15:05,680 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d/f, FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d/recovered.edits] 2023-07-18 12:15:05,689 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946/recovered.edits/4.seqid 2023-07-18 12:15:05,689 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b/recovered.edits/4.seqid 2023-07-18 12:15:05,690 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/1fba925690d6ed72f87f2e3a3535c946 2023-07-18 12:15:05,690 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d/recovered.edits/4.seqid to 
hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d/recovered.edits/4.seqid 2023-07-18 12:15:05,691 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/efb3c52d2d9300bee3cf75ecafb7a20b 2023-07-18 12:15:05,691 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0/recovered.edits/4.seqid 2023-07-18 12:15:05,691 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868/recovered.edits/4.seqid to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/archive/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868/recovered.edits/4.seqid 2023-07-18 12:15:05,691 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/e06324b926346992a4f4b330fd782e9d 2023-07-18 12:15:05,692 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/97f6fbd91a4455f841e5d4284bd020a0 2023-07-18 12:15:05,692 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/.tmp/data/default/Group_testDisabledTableMove/5022547219127c4fb176777611449868 2023-07-18 12:15:05,692 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 12:15:05,694 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=152, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 12:15:05,696 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-18 12:15:05,702 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-18 12:15:05,703 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=152, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 12:15:05,703 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-18 12:15:05,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682505703"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:05,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682505703"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:05,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682505703"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:05,704 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689682504701.5022547219127c4fb176777611449868.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682505703"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:05,704 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682505703"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:05,705 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 12:15:05,706 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1fba925690d6ed72f87f2e3a3535c946, NAME => 'Group_testDisabledTableMove,,1689682504701.1fba925690d6ed72f87f2e3a3535c946.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 97f6fbd91a4455f841e5d4284bd020a0, NAME => 'Group_testDisabledTableMove,aaaaa,1689682504701.97f6fbd91a4455f841e5d4284bd020a0.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => e06324b926346992a4f4b330fd782e9d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689682504701.e06324b926346992a4f4b330fd782e9d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 5022547219127c4fb176777611449868, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689682504701.5022547219127c4fb176777611449868.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => efb3c52d2d9300bee3cf75ecafb7a20b, NAME => 'Group_testDisabledTableMove,zzzzz,1689682504701.efb3c52d2d9300bee3cf75ecafb7a20b.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 12:15:05,706 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
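In the Delete records above, each cell carries the wall-clock timestamp 1689682505703 while the mutation-level "ts" is 9223372036854775807, i.e. Long.MAX_VALUE, which HBase exposes as HConstants.LATEST_TIMESTAMP and uses when no explicit timestamp is set on the mutation itself. A one-line check, included only to make the constant recognizable:

    import org.apache.hadoop.hbase.HConstants;

    public final class LatestTimestampNote {
      public static void main(String[] args) {
        // Prints "true": the ts seen in the meta Delete records is LATEST_TIMESTAMP.
        System.out.println(HConstants.LATEST_TIMESTAMP == Long.MAX_VALUE);
      }
    }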
2023-07-18 12:15:05,706 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689682505706"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:05,707 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-18 12:15:05,709 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=152, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 12:15:05,711 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=152, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 43 msec 2023-07-18 12:15:05,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-18 12:15:05,776 INFO [Listener at localhost/37687] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 152 completed 2023-07-18 12:15:05,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:05,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:05,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:05,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:15:05,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:05,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:35237] to rsgroup default 2023-07-18 12:15:05,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:05,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1213895215 2023-07-18 12:15:05,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:05,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:15:05,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1213895215, current retry=0 2023-07-18 12:15:05,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35237,1689682479509, jenkins-hbase4.apache.org,41985,1689682479721] are moved back to Group_testDisabledTableMove_1213895215 2023-07-18 12:15:05,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1213895215 => default 2023-07-18 12:15:05,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:05,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1213895215 2023-07-18 12:15:05,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:05,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:05,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 12:15:05,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:05,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:05,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
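The cleanup above drains the two remaining servers of Group_testDisabledTableMove_1213895215 back into the default group and then removes the now-empty group, dropping the ZK GroupInfo count from 6 to 5. Below is a sketch of the same cleanup through RSGroupAdminClient (the class that the test's VerifyingRSGroupAdminClient delegates to, per the stack traces later in the log); the server addresses are copied from the log and conn is again an assumed Connection.

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class GroupCleanupSketch {
      static void cleanupGroup(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        String group = "Group_testDisabledTableMove_1213895215";

        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41985));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35237));

        // Move the group's servers back to 'default'; the master relocates any
        // regions of the group's tables first (0 regions in this run).
        rsGroupAdmin.moveServers(servers, "default");

        // With no servers and no tables left, the group can be removed.
        rsGroupAdmin.removeRSGroup(group);
      }
    }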
2023-07-18 12:15:05,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:05,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:05,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:05,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:05,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:05,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:05,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:05,802 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:05,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:05,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:05,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:05,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:05,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:05,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:05,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:05,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:15:05,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:05,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683705811, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:15:05,812 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:05,813 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:05,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:05,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:05,814 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:05,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:05,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:05,833 INFO [Listener at localhost/37687] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=520 (was 517) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1026535489_17 at /127.0.0.1:44300 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x120ad869-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x497c82a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1311322825_17 at /127.0.0.1:43060 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=822 (was 811) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 467), ProcessCount=176 (was 176), AvailableMemoryMB=2426 (was 2457) 2023-07-18 12:15:05,833 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-18 12:15:05,852 INFO [Listener at localhost/37687] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=520, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=176, AvailableMemoryMB=2426 2023-07-18 12:15:05,852 WARN [Listener at localhost/37687] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-18 12:15:05,852 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-18 12:15:05,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:05,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:05,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:05,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 12:15:05,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:05,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:05,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:05,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:05,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:05,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:05,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:05,873 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:05,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:05,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 
12:15:05,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:05,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:05,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:05,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:05,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:05,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36151] to rsgroup master 2023-07-18 12:15:05,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:05,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51504 deadline: 1689683705884, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 2023-07-18 12:15:05,885 WARN [Listener at localhost/37687] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36151 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 12:15:05,887 INFO [Listener at localhost/37687] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:05,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:05,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:05,888 INFO [Listener at localhost/37687] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35237, jenkins-hbase4.apache.org:41985, jenkins-hbase4.apache.org:44567, jenkins-hbase4.apache.org:44601], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:05,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:05,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36151] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:05,889 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 12:15:05,889 INFO [Listener at localhost/37687] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 12:15:05,889 DEBUG [Listener at localhost/37687] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3e4d79c0 to 127.0.0.1:50805 2023-07-18 12:15:05,890 DEBUG [Listener at localhost/37687] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:05,893 DEBUG [Listener at localhost/37687] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 12:15:05,893 DEBUG [Listener at localhost/37687] util.JVMClusterUtil(257): Found active master hash=53875599, stopped=false 2023-07-18 12:15:05,893 DEBUG [Listener at localhost/37687] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 12:15:05,893 DEBUG [Listener at localhost/37687] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 12:15:05,893 INFO [Listener at localhost/37687] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:15:05,896 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:05,896 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:05,896 INFO [Listener at localhost/37687] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 12:15:05,896 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:05,896 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:05,896 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:05,896 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:05,897 DEBUG [Listener at localhost/37687] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3c9cf855 to 127.0.0.1:50805 2023-07-18 12:15:05,897 DEBUG [Listener at localhost/37687] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:05,897 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:05,897 INFO [Listener at localhost/37687] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35237,1689682479509' ***** 2023-07-18 12:15:05,897 INFO [Listener at localhost/37687] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:05,897 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:05,897 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:05,897 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:05,897 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:05,897 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:05,899 INFO [Listener at localhost/37687] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41985,1689682479721' ***** 2023-07-18 12:15:05,904 INFO [Listener at localhost/37687] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:05,905 INFO [Listener at localhost/37687] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44601,1689682479947' ***** 2023-07-18 12:15:05,905 INFO [Listener at localhost/37687] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:05,905 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:05,906 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:05,906 INFO [Listener at localhost/37687] regionserver.HRegionServer(2297): ***** STOPPING region 
server 'jenkins-hbase4.apache.org,44567,1689682483625' ***** 2023-07-18 12:15:05,907 INFO [Listener at localhost/37687] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:05,908 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:05,920 INFO [RS:1;jenkins-hbase4:41985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4d520d27{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:05,920 INFO [RS:0;jenkins-hbase4:35237] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2a1b55bd{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:05,920 INFO [RS:2;jenkins-hbase4:44601] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@36c7be16{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:05,920 INFO [RS:3;jenkins-hbase4:44567] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1a482f9e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:05,925 INFO [RS:2;jenkins-hbase4:44601] server.AbstractConnector(383): Stopped ServerConnector@58b8c90a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:05,925 INFO [RS:0;jenkins-hbase4:35237] server.AbstractConnector(383): Stopped ServerConnector@4cab7999{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:05,925 INFO [RS:2;jenkins-hbase4:44601] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:05,925 INFO [RS:0;jenkins-hbase4:35237] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:05,925 INFO [RS:3;jenkins-hbase4:44567] server.AbstractConnector(383): Stopped ServerConnector@614a5820{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:05,926 INFO [RS:3;jenkins-hbase4:44567] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:05,927 INFO [RS:1;jenkins-hbase4:41985] server.AbstractConnector(383): Stopped ServerConnector@72b0dcfa{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:05,927 INFO [RS:1;jenkins-hbase4:41985] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:05,929 INFO [RS:2;jenkins-hbase4:44601] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5526bfb1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:05,929 INFO [RS:1;jenkins-hbase4:41985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ec386b4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:05,930 INFO [RS:2;jenkins-hbase4:44601] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@477c886b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:05,929 INFO [RS:3;jenkins-hbase4:44567] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41ed43db{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:05,929 INFO [RS:0;jenkins-hbase4:35237] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a6a072{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:05,932 INFO [RS:3;jenkins-hbase4:44567] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@537ec0a8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:05,932 INFO [RS:1;jenkins-hbase4:41985] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2afce463{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:05,935 INFO [RS:0;jenkins-hbase4:35237] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@249e2011{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:05,936 INFO [RS:2;jenkins-hbase4:44601] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:05,936 INFO [RS:1;jenkins-hbase4:41985] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:05,936 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:05,936 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:05,937 INFO [RS:2;jenkins-hbase4:44601] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:05,937 INFO [RS:1;jenkins-hbase4:41985] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:05,937 INFO [RS:1;jenkins-hbase4:41985] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:05,938 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:15:05,938 DEBUG [RS:1;jenkins-hbase4:41985] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0fb36c19 to 127.0.0.1:50805 2023-07-18 12:15:05,938 DEBUG [RS:1;jenkins-hbase4:41985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:05,938 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41985,1689682479721; all regions closed. 
2023-07-18 12:15:05,938 INFO [RS:0;jenkins-hbase4:35237] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:05,939 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:05,937 INFO [RS:2;jenkins-hbase4:44601] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:05,939 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(3305): Received CLOSE for 521b60f74d0b1bace698944d2a6d3bba 2023-07-18 12:15:05,939 INFO [RS:0;jenkins-hbase4:35237] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:05,939 INFO [RS:0;jenkins-hbase4:35237] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:05,939 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:15:05,939 DEBUG [RS:0;jenkins-hbase4:35237] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x15bb4820 to 127.0.0.1:50805 2023-07-18 12:15:05,940 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(3305): Received CLOSE for c0115abc37809fbbb5bf11832155875e 2023-07-18 12:15:05,940 DEBUG [RS:0;jenkins-hbase4:35237] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:05,941 INFO [RS:3;jenkins-hbase4:44567] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:05,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 521b60f74d0b1bace698944d2a6d3bba, disabling compactions & flushes 2023-07-18 12:15:05,940 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(3305): Received CLOSE for 72b81988a89d5bc06336b9b0a03ce7c9 2023-07-18 12:15:05,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:15:05,941 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:05,941 INFO [RS:3;jenkins-hbase4:44567] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:05,941 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35237,1689682479509; all regions closed. 2023-07-18 12:15:05,942 INFO [RS:3;jenkins-hbase4:44567] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:05,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:15:05,942 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:05,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. after waiting 0 ms 2023-07-18 12:15:05,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 
2023-07-18 12:15:05,942 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(3305): Received CLOSE for a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:05,942 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:05,942 DEBUG [RS:3;jenkins-hbase4:44567] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65354fdb to 127.0.0.1:50805 2023-07-18 12:15:05,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 521b60f74d0b1bace698944d2a6d3bba 1/1 column families, dataSize=28.46 KB heapSize=46.80 KB 2023-07-18 12:15:05,947 DEBUG [RS:2;jenkins-hbase4:44601] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5739a2bd to 127.0.0.1:50805 2023-07-18 12:15:05,947 DEBUG [RS:3;jenkins-hbase4:44567] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:05,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a094d11666c446d7944327b133b4e60c, disabling compactions & flushes 2023-07-18 12:15:05,948 DEBUG [RS:2;jenkins-hbase4:44601] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:05,948 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 12:15:05,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:05,948 INFO [RS:2;jenkins-hbase4:44601] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:05,948 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1478): Online Regions={a094d11666c446d7944327b133b4e60c=testRename,,1689682499076.a094d11666c446d7944327b133b4e60c.} 2023-07-18 12:15:05,948 INFO [RS:2;jenkins-hbase4:44601] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:05,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:05,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. after waiting 0 ms 2023-07-18 12:15:05,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:05,948 INFO [RS:2;jenkins-hbase4:44601] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-18 12:15:05,949 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 12:15:05,949 DEBUG [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1504): Waiting on a094d11666c446d7944327b133b4e60c 2023-07-18 12:15:05,954 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-18 12:15:05,954 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1478): Online Regions={521b60f74d0b1bace698944d2a6d3bba=hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba., c0115abc37809fbbb5bf11832155875e=hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e., 1588230740=hbase:meta,,1.1588230740, 72b81988a89d5bc06336b9b0a03ce7c9=unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9.} 2023-07-18 12:15:05,955 DEBUG [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1504): Waiting on 1588230740, 521b60f74d0b1bace698944d2a6d3bba, 72b81988a89d5bc06336b9b0a03ce7c9, c0115abc37809fbbb5bf11832155875e 2023-07-18 12:15:05,955 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 12:15:05,955 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 12:15:05,955 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 12:15:05,955 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 12:15:05,955 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 12:15:05,955 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=36.31 KB heapSize=59.22 KB 2023-07-18 12:15:05,958 DEBUG [RS:0;jenkins-hbase4:35237] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs 2023-07-18 12:15:05,958 INFO [RS:0;jenkins-hbase4:35237] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35237%2C1689682479509.meta:.meta(num 1689682482106) 2023-07-18 12:15:05,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/testRename/a094d11666c446d7944327b133b4e60c/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 12:15:05,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 2023-07-18 12:15:05,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a094d11666c446d7944327b133b4e60c: 2023-07-18 12:15:05,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689682499076.a094d11666c446d7944327b133b4e60c. 
2023-07-18 12:15:05,961 DEBUG [RS:1;jenkins-hbase4:41985] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs 2023-07-18 12:15:05,961 INFO [RS:1;jenkins-hbase4:41985] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41985%2C1689682479721:(num 1689682481926) 2023-07-18 12:15:05,961 DEBUG [RS:1;jenkins-hbase4:41985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:05,961 INFO [RS:1;jenkins-hbase4:41985] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:05,962 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:05,962 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:05,963 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:05,964 INFO [RS:1;jenkins-hbase4:41985] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:05,964 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:05,965 INFO [RS:1;jenkins-hbase4:41985] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:05,965 INFO [RS:1;jenkins-hbase4:41985] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:05,965 INFO [RS:1;jenkins-hbase4:41985] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:05,965 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:05,972 INFO [RS:1;jenkins-hbase4:41985] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41985 2023-07-18 12:15:05,973 DEBUG [RS:0;jenkins-hbase4:35237] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs 2023-07-18 12:15:05,973 INFO [RS:0;jenkins-hbase4:35237] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35237%2C1689682479509:(num 1689682481926) 2023-07-18 12:15:05,973 DEBUG [RS:0;jenkins-hbase4:35237] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:05,974 INFO [RS:0;jenkins-hbase4:35237] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:05,975 INFO [RS:0;jenkins-hbase4:35237] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:05,980 INFO [RS:0;jenkins-hbase4:35237] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:05,980 INFO [RS:0;jenkins-hbase4:35237] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:05,980 INFO [RS:0;jenkins-hbase4:35237] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:05,980 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 12:15:05,981 INFO [RS:0;jenkins-hbase4:35237] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35237 2023-07-18 12:15:05,990 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 12:15:05,990 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 12:15:05,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.46 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba/.tmp/m/7d40c2e50e604665b402d31f5571c3a4 2023-07-18 12:15:05,999 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=33.39 KB at sequenceid=208 (bloomFilter=false), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/info/3b1800e1f1254feb96b7921011e0c358 2023-07-18 12:15:06,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7d40c2e50e604665b402d31f5571c3a4 2023-07-18 12:15:06,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba/.tmp/m/7d40c2e50e604665b402d31f5571c3a4 as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba/m/7d40c2e50e604665b402d31f5571c3a4 2023-07-18 12:15:06,007 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3b1800e1f1254feb96b7921011e0c358 2023-07-18 12:15:06,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7d40c2e50e604665b402d31f5571c3a4 2023-07-18 12:15:06,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba/m/7d40c2e50e604665b402d31f5571c3a4, entries=28, sequenceid=95, filesize=6.1 K 2023-07-18 12:15:06,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.46 KB/29141, heapSize ~46.79 KB/47912, currentSize=0 B/0 for 521b60f74d0b1bace698944d2a6d3bba in 71ms, sequenceid=95, compaction requested=false 2023-07-18 12:15:06,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/rsgroup/521b60f74d0b1bace698944d2a6d3bba/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-18 12:15:06,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:15:06,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 
2023-07-18 12:15:06,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 521b60f74d0b1bace698944d2a6d3bba: 2023-07-18 12:15:06,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689682482659.521b60f74d0b1bace698944d2a6d3bba. 2023-07-18 12:15:06,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c0115abc37809fbbb5bf11832155875e, disabling compactions & flushes 2023-07-18 12:15:06,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:15:06,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:15:06,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. after waiting 0 ms 2023-07-18 12:15:06,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:15:06,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c0115abc37809fbbb5bf11832155875e 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-18 12:15:06,027 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=208 (bloomFilter=false), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/rep_barrier/7bd76c2679cc491a98666c4f4441787f 2023-07-18 12:15:06,033 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7bd76c2679cc491a98666c4f4441787f 2023-07-18 12:15:06,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e/.tmp/info/4a0a64cbeb6e46698d5610b4f8e2571f 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41985,1689682479721 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:06,044 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35237,1689682479509 2023-07-18 12:15:06,048 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35237,1689682479509] 2023-07-18 12:15:06,048 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35237,1689682479509; numProcessing=1 2023-07-18 12:15:06,048 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e/.tmp/info/4a0a64cbeb6e46698d5610b4f8e2571f as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e/info/4a0a64cbeb6e46698d5610b4f8e2571f 2023-07-18 12:15:06,049 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=208 (bloomFilter=false), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/table/c149a074f4534137b46d4327a8c6316a 2023-07-18 12:15:06,055 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c149a074f4534137b46d4327a8c6316a 2023-07-18 12:15:06,057 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/info/3b1800e1f1254feb96b7921011e0c358 as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/3b1800e1f1254feb96b7921011e0c358 2023-07-18 12:15:06,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e/info/4a0a64cbeb6e46698d5610b4f8e2571f, entries=2, sequenceid=6, filesize=4.8 K 2023-07-18 12:15:06,058 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for c0115abc37809fbbb5bf11832155875e in 33ms, sequenceid=6, compaction requested=false 2023-07-18 12:15:06,065 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/namespace/c0115abc37809fbbb5bf11832155875e/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-18 12:15:06,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:15:06,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c0115abc37809fbbb5bf11832155875e: 2023-07-18 12:15:06,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689682482350.c0115abc37809fbbb5bf11832155875e. 2023-07-18 12:15:06,068 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3b1800e1f1254feb96b7921011e0c358 2023-07-18 12:15:06,069 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/info/3b1800e1f1254feb96b7921011e0c358, entries=52, sequenceid=208, filesize=10.7 K 2023-07-18 12:15:06,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 72b81988a89d5bc06336b9b0a03ce7c9, disabling compactions & flushes 2023-07-18 12:15:06,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:06,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 
2023-07-18 12:15:06,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. after waiting 0 ms 2023-07-18 12:15:06,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:06,070 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/rep_barrier/7bd76c2679cc491a98666c4f4441787f as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier/7bd76c2679cc491a98666c4f4441787f 2023-07-18 12:15:06,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/default/unmovedTable/72b81988a89d5bc06336b9b0a03ce7c9/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 12:15:06,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:06,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 72b81988a89d5bc06336b9b0a03ce7c9: 2023-07-18 12:15:06,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689682500738.72b81988a89d5bc06336b9b0a03ce7c9. 2023-07-18 12:15:06,079 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7bd76c2679cc491a98666c4f4441787f 2023-07-18 12:15:06,079 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/rep_barrier/7bd76c2679cc491a98666c4f4441787f, entries=8, sequenceid=208, filesize=5.8 K 2023-07-18 12:15:06,080 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/.tmp/table/c149a074f4534137b46d4327a8c6316a as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/c149a074f4534137b46d4327a8c6316a 2023-07-18 12:15:06,087 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c149a074f4534137b46d4327a8c6316a 2023-07-18 12:15:06,087 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/table/c149a074f4534137b46d4327a8c6316a, entries=16, sequenceid=208, filesize=6.0 K 2023-07-18 12:15:06,088 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~36.31 KB/37186, heapSize ~59.17 KB/60592, currentSize=0 B/0 for 1588230740 in 133ms, sequenceid=208, compaction requested=true 2023-07-18 12:15:06,088 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new 
MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 12:15:06,104 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/data/hbase/meta/1588230740/recovered.edits/211.seqid, newMaxSeqId=211, maxSeqId=99 2023-07-18 12:15:06,105 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:15:06,105 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 12:15:06,105 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 12:15:06,105 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 12:15:06,148 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35237,1689682479509 already deleted, retry=false 2023-07-18 12:15:06,148 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35237,1689682479509 expired; onlineServers=3 2023-07-18 12:15:06,148 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41985,1689682479721] 2023-07-18 12:15:06,148 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41985,1689682479721; numProcessing=2 2023-07-18 12:15:06,149 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44567,1689682483625; all regions closed. 2023-07-18 12:15:06,155 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44601,1689682479947; all regions closed. 
2023-07-18 12:15:06,155 DEBUG [RS:3;jenkins-hbase4:44567] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs 2023-07-18 12:15:06,155 INFO [RS:3;jenkins-hbase4:44567] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44567%2C1689682483625.meta:.meta(num 1689682484945) 2023-07-18 12:15:06,162 DEBUG [RS:2;jenkins-hbase4:44601] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs 2023-07-18 12:15:06,162 INFO [RS:2;jenkins-hbase4:44601] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44601%2C1689682479947.meta:.meta(num 1689682491799) 2023-07-18 12:15:06,163 DEBUG [RS:3;jenkins-hbase4:44567] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs 2023-07-18 12:15:06,163 INFO [RS:3;jenkins-hbase4:44567] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44567%2C1689682483625:(num 1689682484130) 2023-07-18 12:15:06,163 DEBUG [RS:3;jenkins-hbase4:44567] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:06,163 INFO [RS:3;jenkins-hbase4:44567] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:06,163 INFO [RS:3;jenkins-hbase4:44567] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:06,163 INFO [RS:3;jenkins-hbase4:44567] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:06,164 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:06,164 INFO [RS:3;jenkins-hbase4:44567] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:06,164 INFO [RS:3;jenkins-hbase4:44567] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:06,166 INFO [RS:3;jenkins-hbase4:44567] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44567 2023-07-18 12:15:06,170 DEBUG [RS:2;jenkins-hbase4:44601] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/oldWALs 2023-07-18 12:15:06,170 INFO [RS:2;jenkins-hbase4:44601] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44601%2C1689682479947:(num 1689682481926) 2023-07-18 12:15:06,170 DEBUG [RS:2;jenkins-hbase4:44601] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:06,170 INFO [RS:2;jenkins-hbase4:44601] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:06,170 INFO [RS:2;jenkins-hbase4:44601] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:06,170 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 12:15:06,171 INFO [RS:2;jenkins-hbase4:44601] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44601 2023-07-18 12:15:06,193 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,193 INFO [RS:0;jenkins-hbase4:35237] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35237,1689682479509; zookeeper connection closed. 2023-07-18 12:15:06,193 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:35237-0x101785affaa0001, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,194 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7c51c929] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7c51c929 2023-07-18 12:15:06,247 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,247 INFO [RS:1;jenkins-hbase4:41985] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41985,1689682479721; zookeeper connection closed. 2023-07-18 12:15:06,247 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:41985-0x101785affaa0002, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,247 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@554797ef] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@554797ef 2023-07-18 12:15:06,248 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:06,248 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41985,1689682479721 already deleted, retry=false 2023-07-18 12:15:06,248 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:06,248 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41985,1689682479721 expired; onlineServers=2 2023-07-18 12:15:06,248 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:06,249 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44567,1689682483625] 2023-07-18 12:15:06,249 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44567,1689682483625; numProcessing=3 2023-07-18 12:15:06,249 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): 
regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44601,1689682479947 2023-07-18 12:15:06,249 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44567,1689682483625 2023-07-18 12:15:06,251 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44567,1689682483625 already deleted, retry=false 2023-07-18 12:15:06,252 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44567,1689682483625 expired; onlineServers=1 2023-07-18 12:15:06,252 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44601,1689682479947] 2023-07-18 12:15:06,252 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44601,1689682479947; numProcessing=4 2023-07-18 12:15:06,351 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,351 INFO [RS:3;jenkins-hbase4:44567] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44567,1689682483625; zookeeper connection closed. 2023-07-18 12:15:06,351 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44567-0x101785affaa000b, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,352 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2c391b17] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2c391b17 2023-07-18 12:15:06,352 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44601,1689682479947 already deleted, retry=false 2023-07-18 12:15:06,352 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44601,1689682479947 expired; onlineServers=0 2023-07-18 12:15:06,352 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36151,1689682477215' ***** 2023-07-18 12:15:06,352 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 12:15:06,353 DEBUG [M:0;jenkins-hbase4:36151] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@420b19b4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:06,353 INFO [M:0;jenkins-hbase4:36151] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:06,356 INFO [M:0;jenkins-hbase4:36151] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@38aa31da{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 
12:15:06,356 INFO [M:0;jenkins-hbase4:36151] server.AbstractConnector(383): Stopped ServerConnector@7e43481b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:06,356 INFO [M:0;jenkins-hbase4:36151] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:06,356 INFO [M:0;jenkins-hbase4:36151] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@680fffdc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:06,357 INFO [M:0;jenkins-hbase4:36151] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38bddd36{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:06,357 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:06,357 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:06,357 INFO [M:0;jenkins-hbase4:36151] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36151,1689682477215 2023-07-18 12:15:06,357 INFO [M:0;jenkins-hbase4:36151] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36151,1689682477215; all regions closed. 2023-07-18 12:15:06,357 DEBUG [M:0;jenkins-hbase4:36151] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:06,357 INFO [M:0;jenkins-hbase4:36151] master.HMaster(1491): Stopping master jetty server 2023-07-18 12:15:06,358 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:06,358 INFO [M:0;jenkins-hbase4:36151] server.AbstractConnector(383): Stopped ServerConnector@44cc3892{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:06,358 DEBUG [M:0;jenkins-hbase4:36151] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 12:15:06,359 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 12:15:06,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682481527] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682481527,5,FailOnTimeoutGroup] 2023-07-18 12:15:06,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682481528] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682481528,5,FailOnTimeoutGroup] 2023-07-18 12:15:06,359 DEBUG [M:0;jenkins-hbase4:36151] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 12:15:06,359 INFO [M:0;jenkins-hbase4:36151] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 12:15:06,359 INFO [M:0;jenkins-hbase4:36151] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
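[Editor's illustration] The master teardown recorded above (stopping the info server and Jetty contexts, cancelling the LogCleaner/HFileCleaner chores, waiting for the mob compaction threads) is normally driven by the test harness rather than called directly. A minimal, hypothetical sketch of that path, assuming a test that owns its own HBaseTestingUtility (class name is made up for illustration):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterTeardownSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            util.startMiniCluster();   // brings up ZK, HDFS, one master and one region server
            try {
                // ... test body would run against util.getConnection() here ...
            } finally {
                // Triggers the sequence logged above: region servers exit, the master stops
                // its info server and chores, then closes its local master:store region.
                util.shutdownMiniCluster();
            }
        }
    }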
2023-07-18 12:15:06,359 INFO [M:0;jenkins-hbase4:36151] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 12:15:06,359 DEBUG [M:0;jenkins-hbase4:36151] master.HMaster(1512): Stopping service threads 2023-07-18 12:15:06,359 INFO [M:0;jenkins-hbase4:36151] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 12:15:06,359 ERROR [M:0;jenkins-hbase4:36151] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-18 12:15:06,360 INFO [M:0;jenkins-hbase4:36151] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 12:15:06,360 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 12:15:06,361 DEBUG [M:0;jenkins-hbase4:36151] zookeeper.ZKUtil(398): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 12:15:06,361 WARN [M:0;jenkins-hbase4:36151] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 12:15:06,361 INFO [M:0;jenkins-hbase4:36151] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 12:15:06,361 INFO [M:0;jenkins-hbase4:36151] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 12:15:06,361 DEBUG [M:0;jenkins-hbase4:36151] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 12:15:06,361 INFO [M:0;jenkins-hbase4:36151] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:06,361 DEBUG [M:0;jenkins-hbase4:36151] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:06,361 DEBUG [M:0;jenkins-hbase4:36151] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 12:15:06,361 DEBUG [M:0;jenkins-hbase4:36151] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
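[Editor's illustration] The warnings above ("Unable to get data of znode /hbase/master because node does not exist", "Can't get master address from ZooKeeper; znode data == null") simply mean the master-address znode has already been removed during shutdown. A rough sketch of checking that znode with a plain ZooKeeper client; the quorum address, session timeout, and class name are placeholders, not values from this run:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class MasterZNodeCheckSketch {
        public static void main(String[] args) throws Exception {
            // "127.0.0.1:50805" stands in for the test quorum; any reachable ensemble works.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:50805", 30000, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    // The ZKWatcher lines in the log above are these same event callbacks.
                }
            });
            try {
                Stat stat = zk.exists("/hbase/master", false);
                if (stat == null) {
                    // Same condition as the warning above: no active master registered.
                    System.out.println("/hbase/master does not exist; no master address");
                } else {
                    byte[] data = zk.getData("/hbase/master", false, stat);
                    System.out.println("/hbase/master holds " + data.length + " bytes");
                }
            } finally {
                zk.close();
            }
        }
    }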
2023-07-18 12:15:06,361 INFO [M:0;jenkins-hbase4:36151] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=509.28 KB heapSize=609.26 KB 2023-07-18 12:15:06,379 INFO [M:0;jenkins-hbase4:36151] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=509.28 KB at sequenceid=1128 (bloomFilter=true), to=hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b1c0ba12a37b4cecb41702bc66ded4bf 2023-07-18 12:15:06,386 DEBUG [M:0;jenkins-hbase4:36151] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b1c0ba12a37b4cecb41702bc66ded4bf as hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b1c0ba12a37b4cecb41702bc66ded4bf 2023-07-18 12:15:06,392 INFO [M:0;jenkins-hbase4:36151] regionserver.HStore(1080): Added hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b1c0ba12a37b4cecb41702bc66ded4bf, entries=151, sequenceid=1128, filesize=26.6 K 2023-07-18 12:15:06,393 INFO [M:0;jenkins-hbase4:36151] regionserver.HRegion(2948): Finished flush of dataSize ~509.28 KB/521504, heapSize ~609.24 KB/623864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=1128, compaction requested=false 2023-07-18 12:15:06,395 INFO [M:0;jenkins-hbase4:36151] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:06,395 DEBUG [M:0;jenkins-hbase4:36151] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:15:06,400 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:06,400 INFO [M:0;jenkins-hbase4:36151] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 12:15:06,400 INFO [M:0;jenkins-hbase4:36151] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36151 2023-07-18 12:15:06,402 DEBUG [M:0;jenkins-hbase4:36151] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36151,1689682477215 already deleted, retry=false 2023-07-18 12:15:06,494 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,494 INFO [RS:2;jenkins-hbase4:44601] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44601,1689682479947; zookeeper connection closed. 
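[Editor's illustration] The flush records above show the master's local master:store region writing its ~509 KB memstore to a temporary HFile and committing it into the store directory before the region closes. The flush mechanics are broadly the same as what a client-requested flush triggers on an ordinary table; a hedged sketch using the public Admin API, where the table name and class name are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Asks the serving region servers to write the table's memstores out as
                // HFiles, analogous to the flush/commit sequence recorded above.
                admin.flush(TableName.valueOf("my_table"));
            }
        }
    }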
2023-07-18 12:15:06,494 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): regionserver:44601-0x101785affaa0003, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,495 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7738e1df] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7738e1df 2023-07-18 12:15:06,495 INFO [Listener at localhost/37687] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 12:15:06,594 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,594 INFO [M:0;jenkins-hbase4:36151] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36151,1689682477215; zookeeper connection closed. 2023-07-18 12:15:06,594 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): master:36151-0x101785affaa0000, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:06,596 WARN [Listener at localhost/37687] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:06,601 INFO [Listener at localhost/37687] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:06,705 WARN [BP-1681315234-172.31.14.131-1689682473336 heartbeating to localhost/127.0.0.1:46497] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 12:15:06,705 WARN [BP-1681315234-172.31.14.131-1689682473336 heartbeating to localhost/127.0.0.1:46497] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1681315234-172.31.14.131-1689682473336 (Datanode Uuid b9d02080-3f04-4581-8baf-f681a8a8cfcf) service to localhost/127.0.0.1:46497 2023-07-18 12:15:06,707 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/dfs/data/data5/current/BP-1681315234-172.31.14.131-1689682473336] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:06,707 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/dfs/data/data6/current/BP-1681315234-172.31.14.131-1689682473336] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:06,709 WARN [Listener at localhost/37687] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:06,716 INFO [Listener at localhost/37687] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:06,819 WARN [BP-1681315234-172.31.14.131-1689682473336 heartbeating to localhost/127.0.0.1:46497] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 12:15:06,819 WARN [BP-1681315234-172.31.14.131-1689682473336 heartbeating to localhost/127.0.0.1:46497] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1681315234-172.31.14.131-1689682473336 (Datanode Uuid 69068bff-c55e-463e-91c5-67412dc24480) service to localhost/127.0.0.1:46497 2023-07-18 12:15:06,820 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/dfs/data/data3/current/BP-1681315234-172.31.14.131-1689682473336] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:06,820 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/dfs/data/data4/current/BP-1681315234-172.31.14.131-1689682473336] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:06,822 WARN [Listener at localhost/37687] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:06,825 INFO [Listener at localhost/37687] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:06,929 WARN [BP-1681315234-172.31.14.131-1689682473336 heartbeating to localhost/127.0.0.1:46497] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 12:15:06,929 WARN [BP-1681315234-172.31.14.131-1689682473336 heartbeating to localhost/127.0.0.1:46497] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1681315234-172.31.14.131-1689682473336 (Datanode Uuid 4562afda-89af-40f7-b2a1-8b6da745d53c) service to localhost/127.0.0.1:46497 2023-07-18 12:15:06,930 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/dfs/data/data1/current/BP-1681315234-172.31.14.131-1689682473336] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:06,931 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/cluster_08cba555-bad0-f649-b1d1-80d4006ed299/dfs/data/data2/current/BP-1681315234-172.31.14.131-1689682473336] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:06,968 INFO [Listener at localhost/37687] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:07,092 INFO [Listener at localhost/37687] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 12:15:07,155 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 12:15:07,155 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 12:15:07,155 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.log.dir so I do NOT create it in target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459 2023-07-18 12:15:07,155 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea471fd-f3d3-6f93-e830-12c509f24e8d/hadoop.tmp.dir so I do NOT create it in target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459 2023-07-18 12:15:07,155 INFO [Listener at localhost/37687] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5, deleteOnExit=true 2023-07-18 12:15:07,156 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 12:15:07,156 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/test.cache.data in system properties and HBase conf 2023-07-18 12:15:07,156 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 12:15:07,156 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir in system properties and HBase conf 2023-07-18 12:15:07,156 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 12:15:07,156 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 12:15:07,156 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 12:15:07,156 DEBUG [Listener at localhost/37687] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 12:15:07,157 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 12:15:07,157 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 12:15:07,157 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 12:15:07,157 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 12:15:07,157 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 12:15:07,157 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 12:15:07,157 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 12:15:07,157 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 12:15:07,158 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 12:15:07,158 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/nfs.dump.dir in system properties and HBase conf 2023-07-18 12:15:07,158 INFO [Listener at localhost/37687] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir in system properties and HBase conf 2023-07-18 12:15:07,158 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 12:15:07,158 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 12:15:07,158 INFO [Listener at localhost/37687] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 12:15:07,163 WARN [Listener at localhost/37687] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 12:15:07,163 WARN [Listener at localhost/37687] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 12:15:07,189 DEBUG [Listener at localhost/37687-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101785affaa000a, quorum=127.0.0.1:50805, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 12:15:07,189 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101785affaa000a, quorum=127.0.0.1:50805, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 12:15:07,212 WARN [Listener at localhost/37687] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 12:15:07,215 INFO [Listener at localhost/37687] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 12:15:07,219 INFO [Listener at localhost/37687] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir/Jetty_localhost_38483_hdfs____.q02ox8/webapp 2023-07-18 12:15:07,319 INFO [Listener at localhost/37687] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38483 2023-07-18 12:15:07,323 WARN [Listener at localhost/37687] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 12:15:07,324 WARN [Listener at localhost/37687] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 12:15:07,370 WARN [Listener at localhost/42421] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 12:15:07,389 WARN [Listener at localhost/42421] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 12:15:07,391 WARN [Listener 
at localhost/42421] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 12:15:07,393 INFO [Listener at localhost/42421] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 12:15:07,397 INFO [Listener at localhost/42421] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir/Jetty_localhost_34625_datanode____.5fpvdt/webapp 2023-07-18 12:15:07,435 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:15:07,435 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 12:15:07,435 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 12:15:07,494 INFO [Listener at localhost/42421] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34625 2023-07-18 12:15:07,502 WARN [Listener at localhost/43973] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 12:15:07,521 WARN [Listener at localhost/43973] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 12:15:07,523 WARN [Listener at localhost/43973] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 12:15:07,524 INFO [Listener at localhost/43973] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 12:15:07,528 INFO [Listener at localhost/43973] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir/Jetty_localhost_34431_datanode____qs0r20/webapp 2023-07-18 12:15:07,628 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a5842e096755548: Processing first storage report for DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb from datanode 43a7d7db-04e6-4b5c-958e-03e13bb064a0 2023-07-18 12:15:07,628 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a5842e096755548: from storage DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb node DatanodeRegistration(127.0.0.1:36165, datanodeUuid=43a7d7db-04e6-4b5c-958e-03e13bb064a0, infoPort=39099, infoSecurePort=0, ipcPort=43973, storageInfo=lv=-57;cid=testClusterID;nsid=1441449109;c=1689682507166), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:07,629 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a5842e096755548: Processing first storage report for DS-0746397b-7a6c-4e19-9b4f-924246a81db3 from datanode 43a7d7db-04e6-4b5c-958e-03e13bb064a0 2023-07-18 12:15:07,629 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a5842e096755548: from storage DS-0746397b-7a6c-4e19-9b4f-924246a81db3 node DatanodeRegistration(127.0.0.1:36165, datanodeUuid=43a7d7db-04e6-4b5c-958e-03e13bb064a0, infoPort=39099, infoSecurePort=0, ipcPort=43973, storageInfo=lv=-57;cid=testClusterID;nsid=1441449109;c=1689682507166), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:07,646 INFO [Listener at localhost/43973] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34431 2023-07-18 12:15:07,657 WARN [Listener at localhost/33675] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 12:15:07,682 WARN [Listener at localhost/33675] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 12:15:07,684 WARN [Listener at localhost/33675] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 12:15:07,685 INFO [Listener at localhost/33675] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 12:15:07,688 INFO [Listener at localhost/33675] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir/Jetty_localhost_38197_datanode____.j0cjgv/webapp 2023-07-18 12:15:07,781 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x69a730a27913c81a: Processing first storage report for DS-a331fdf3-c1ee-43df-aac6-e512da806c0b from datanode 7591d0cf-5049-4bdf-b641-ce854d71acd8 2023-07-18 12:15:07,782 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x69a730a27913c81a: from storage DS-a331fdf3-c1ee-43df-aac6-e512da806c0b node DatanodeRegistration(127.0.0.1:36899, datanodeUuid=7591d0cf-5049-4bdf-b641-ce854d71acd8, infoPort=36995, infoSecurePort=0, ipcPort=33675, storageInfo=lv=-57;cid=testClusterID;nsid=1441449109;c=1689682507166), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 12:15:07,782 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x69a730a27913c81a: Processing first storage report for DS-c20fbb11-703e-4ba3-83a9-547516057d5e from datanode 7591d0cf-5049-4bdf-b641-ce854d71acd8 2023-07-18 12:15:07,782 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x69a730a27913c81a: from storage DS-c20fbb11-703e-4ba3-83a9-547516057d5e node DatanodeRegistration(127.0.0.1:36899, datanodeUuid=7591d0cf-5049-4bdf-b641-ce854d71acd8, infoPort=36995, infoSecurePort=0, ipcPort=33675, storageInfo=lv=-57;cid=testClusterID;nsid=1441449109;c=1689682507166), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:07,830 INFO [Listener at localhost/33675] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38197 2023-07-18 12:15:07,842 WARN [Listener at localhost/34965] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 12:15:07,945 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7aa446a392d958e9: Processing first storage report for 
DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9 from datanode c88077f7-c9ed-49a3-a554-c09b2623d890 2023-07-18 12:15:07,945 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7aa446a392d958e9: from storage DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9 node DatanodeRegistration(127.0.0.1:44219, datanodeUuid=c88077f7-c9ed-49a3-a554-c09b2623d890, infoPort=46771, infoSecurePort=0, ipcPort=34965, storageInfo=lv=-57;cid=testClusterID;nsid=1441449109;c=1689682507166), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:07,945 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7aa446a392d958e9: Processing first storage report for DS-4401e9c9-5b85-4d49-a4b5-0f16cdd400c7 from datanode c88077f7-c9ed-49a3-a554-c09b2623d890 2023-07-18 12:15:07,945 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7aa446a392d958e9: from storage DS-4401e9c9-5b85-4d49-a4b5-0f16cdd400c7 node DatanodeRegistration(127.0.0.1:44219, datanodeUuid=c88077f7-c9ed-49a3-a554-c09b2623d890, infoPort=46771, infoSecurePort=0, ipcPort=34965, storageInfo=lv=-57;cid=testClusterID;nsid=1441449109;c=1689682507166), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:07,959 DEBUG [Listener at localhost/34965] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459 2023-07-18 12:15:07,962 INFO [Listener at localhost/34965] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/zookeeper_0, clientPort=65201, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 12:15:07,964 INFO [Listener at localhost/34965] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=65201 2023-07-18 12:15:07,964 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:07,965 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:07,987 INFO [Listener at localhost/34965] util.FSUtils(471): Created version file at hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113 with version=8 2023-07-18 12:15:07,988 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/hbase-staging 2023-07-18 12:15:07,989 DEBUG [Listener at 
localhost/34965] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 12:15:07,989 DEBUG [Listener at localhost/34965] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 12:15:07,989 DEBUG [Listener at localhost/34965] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 12:15:07,989 DEBUG [Listener at localhost/34965] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-18 12:15:07,990 INFO [Listener at localhost/34965] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:07,990 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:07,990 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:07,990 INFO [Listener at localhost/34965] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:07,991 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:07,991 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:07,991 INFO [Listener at localhost/34965] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:07,992 INFO [Listener at localhost/34965] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35371 2023-07-18 12:15:07,993 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:07,994 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:07,996 INFO [Listener at localhost/34965] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35371 connecting to ZooKeeper ensemble=127.0.0.1:65201 2023-07-18 12:15:08,004 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:353710x0, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:08,004 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35371-0x101785b7bbc0000 connected 2023-07-18 12:15:08,024 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:08,025 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet 
exist, /hbase/running 2023-07-18 12:15:08,025 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:08,026 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35371 2023-07-18 12:15:08,031 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35371 2023-07-18 12:15:08,031 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35371 2023-07-18 12:15:08,032 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35371 2023-07-18 12:15:08,032 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35371 2023-07-18 12:15:08,034 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:08,034 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:08,034 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:08,035 INFO [Listener at localhost/34965] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 12:15:08,035 INFO [Listener at localhost/34965] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:08,035 INFO [Listener at localhost/34965] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:08,035 INFO [Listener at localhost/34965] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
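[Editor's illustration] From "Minicluster is down" onward, the utility immediately brings up a second cluster with the same StartMiniClusterOption (one master, three region servers, three data nodes), a fresh MiniZooKeeperCluster on client port 65201, and a new master RPC server. A minimal sketch of driving such a stop/start cycle from test code, assuming HBaseTestingUtility owns the cluster (class name is illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            // Same shape as the option string in the log:
            // 1 master, 3 region servers, 3 data nodes.
            StartMiniClusterOption option = StartMiniClusterOption.builder()
                .numMasters(1)
                .numRegionServers(3)
                .numDataNodes(3)
                .build();
            util.startMiniCluster(option);   // first cluster
            util.shutdownMiniCluster();      // "Minicluster is down"
            util.startMiniCluster(option);   // second cluster, new data dirs and ports
            util.shutdownMiniCluster();
        }
    }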
2023-07-18 12:15:08,036 INFO [Listener at localhost/34965] http.HttpServer(1146): Jetty bound to port 45891 2023-07-18 12:15:08,036 INFO [Listener at localhost/34965] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:08,038 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,038 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@686f0631{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:08,039 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,039 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3bee122e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:08,161 INFO [Listener at localhost/34965] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:08,163 INFO [Listener at localhost/34965] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:08,163 INFO [Listener at localhost/34965] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:08,163 INFO [Listener at localhost/34965] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 12:15:08,165 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,166 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@15e5c50{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir/jetty-0_0_0_0-45891-hbase-server-2_4_18-SNAPSHOT_jar-_-any-560849145931079045/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 12:15:08,168 INFO [Listener at localhost/34965] server.AbstractConnector(333): Started ServerConnector@3bdbfe7c{HTTP/1.1, (http/1.1)}{0.0.0.0:45891} 2023-07-18 12:15:08,168 INFO [Listener at localhost/34965] server.Server(415): Started @36898ms 2023-07-18 12:15:08,168 INFO [Listener at localhost/34965] master.HMaster(444): hbase.rootdir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113, hbase.cluster.distributed=false 2023-07-18 12:15:08,182 INFO [Listener at localhost/34965] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:08,182 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,182 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,182 INFO 
[Listener at localhost/34965] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:08,183 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,183 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:08,183 INFO [Listener at localhost/34965] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:08,183 INFO [Listener at localhost/34965] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40697 2023-07-18 12:15:08,184 INFO [Listener at localhost/34965] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:15:08,185 DEBUG [Listener at localhost/34965] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:15:08,185 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:08,186 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:08,187 INFO [Listener at localhost/34965] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40697 connecting to ZooKeeper ensemble=127.0.0.1:65201 2023-07-18 12:15:08,191 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:406970x0, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:08,192 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:406970x0, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:08,193 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40697-0x101785b7bbc0001 connected 2023-07-18 12:15:08,193 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:08,194 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:08,196 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40697 2023-07-18 12:15:08,197 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40697 2023-07-18 12:15:08,197 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40697 2023-07-18 12:15:08,197 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40697 2023-07-18 12:15:08,198 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40697 2023-07-18 12:15:08,200 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:08,200 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:08,200 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:08,200 INFO [Listener at localhost/34965] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:15:08,200 INFO [Listener at localhost/34965] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:08,200 INFO [Listener at localhost/34965] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:08,201 INFO [Listener at localhost/34965] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 12:15:08,202 INFO [Listener at localhost/34965] http.HttpServer(1146): Jetty bound to port 44997 2023-07-18 12:15:08,202 INFO [Listener at localhost/34965] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:08,204 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,204 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6d4587e2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:08,205 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,205 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10618b14{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:08,327 INFO [Listener at localhost/34965] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:08,328 INFO [Listener at localhost/34965] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:08,328 INFO [Listener at localhost/34965] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:08,328 INFO [Listener at localhost/34965] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 12:15:08,332 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,333 INFO 
[Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@400b68e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir/jetty-0_0_0_0-44997-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4025825743281572659/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:08,335 INFO [Listener at localhost/34965] server.AbstractConnector(333): Started ServerConnector@5d7a011f{HTTP/1.1, (http/1.1)}{0.0.0.0:44997} 2023-07-18 12:15:08,335 INFO [Listener at localhost/34965] server.Server(415): Started @37065ms 2023-07-18 12:15:08,347 INFO [Listener at localhost/34965] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:08,347 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,347 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,347 INFO [Listener at localhost/34965] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:08,347 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,347 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:08,347 INFO [Listener at localhost/34965] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:08,348 INFO [Listener at localhost/34965] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35407 2023-07-18 12:15:08,348 INFO [Listener at localhost/34965] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:15:08,350 DEBUG [Listener at localhost/34965] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:15:08,351 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:08,352 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:08,353 INFO [Listener at localhost/34965] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35407 connecting to ZooKeeper ensemble=127.0.0.1:65201 2023-07-18 12:15:08,357 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:354070x0, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
12:15:08,359 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35407-0x101785b7bbc0002 connected 2023-07-18 12:15:08,359 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:08,360 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:08,361 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:08,361 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35407 2023-07-18 12:15:08,361 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35407 2023-07-18 12:15:08,362 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35407 2023-07-18 12:15:08,362 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35407 2023-07-18 12:15:08,362 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35407 2023-07-18 12:15:08,365 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:08,365 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:08,365 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:08,365 INFO [Listener at localhost/34965] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:15:08,365 INFO [Listener at localhost/34965] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:08,366 INFO [Listener at localhost/34965] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:08,366 INFO [Listener at localhost/34965] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
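[Editor's illustration] The region-server startup records above repeatedly show small RPC executor sizes (handlerCount=3, maxQueueLength=30), far below the usual defaults, which suggests the mini-cluster configuration dials the handler pool down for tests. A hedged sketch of the corresponding configuration key; the value is chosen only to mirror the log, and the class name is made up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class TestRpcTuningSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Shrink the RPC handler pool as the test cluster above appears to do
            // (the default is typically 30). With 3 handlers, the per-handler call
            // queue default yields maxQueueLength=30, matching the records above.
            conf.setInt("hbase.regionserver.handler.count", 3);
            System.out.println("handlers = "
                + conf.getInt("hbase.regionserver.handler.count", -1));
        }
    }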
2023-07-18 12:15:08,366 INFO [Listener at localhost/34965] http.HttpServer(1146): Jetty bound to port 39549 2023-07-18 12:15:08,366 INFO [Listener at localhost/34965] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:08,371 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,371 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68a050ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:08,371 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,372 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2b5d8c66{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:08,386 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 12:15:08,513 INFO [Listener at localhost/34965] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:08,514 INFO [Listener at localhost/34965] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:08,514 INFO [Listener at localhost/34965] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:08,514 INFO [Listener at localhost/34965] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 12:15:08,515 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,515 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@567f5874{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir/jetty-0_0_0_0-39549-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2225946222508005659/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:08,517 INFO [Listener at localhost/34965] server.AbstractConnector(333): Started ServerConnector@4a9e90bf{HTTP/1.1, (http/1.1)}{0.0.0.0:39549} 2023-07-18 12:15:08,517 INFO [Listener at localhost/34965] server.Server(415): Started @37247ms 2023-07-18 12:15:08,529 INFO [Listener at localhost/34965] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:08,529 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,529 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,529 INFO [Listener at 
localhost/34965] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:08,529 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:08,530 INFO [Listener at localhost/34965] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:08,530 INFO [Listener at localhost/34965] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:08,531 INFO [Listener at localhost/34965] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38273 2023-07-18 12:15:08,531 INFO [Listener at localhost/34965] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:15:08,533 DEBUG [Listener at localhost/34965] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:15:08,533 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:08,534 INFO [Listener at localhost/34965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:08,535 INFO [Listener at localhost/34965] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38273 connecting to ZooKeeper ensemble=127.0.0.1:65201 2023-07-18 12:15:08,539 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:382730x0, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:08,541 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38273-0x101785b7bbc0003 connected 2023-07-18 12:15:08,541 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:08,542 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:08,543 DEBUG [Listener at localhost/34965] zookeeper.ZKUtil(164): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:08,543 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38273 2023-07-18 12:15:08,543 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38273 2023-07-18 12:15:08,543 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38273 2023-07-18 12:15:08,544 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38273 2023-07-18 12:15:08,544 DEBUG [Listener at localhost/34965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38273 2023-07-18 12:15:08,546 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:08,546 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:08,546 INFO [Listener at localhost/34965] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:08,547 INFO [Listener at localhost/34965] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:15:08,547 INFO [Listener at localhost/34965] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:08,547 INFO [Listener at localhost/34965] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:08,547 INFO [Listener at localhost/34965] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 12:15:08,548 INFO [Listener at localhost/34965] http.HttpServer(1146): Jetty bound to port 46517 2023-07-18 12:15:08,548 INFO [Listener at localhost/34965] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:08,549 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,549 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4b63a583{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:08,549 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,549 INFO [Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68dd3aa9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:08,666 INFO [Listener at localhost/34965] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:08,667 INFO [Listener at localhost/34965] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:08,667 INFO [Listener at localhost/34965] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:08,668 INFO [Listener at localhost/34965] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 12:15:08,669 INFO [Listener at localhost/34965] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:08,670 INFO 
[Listener at localhost/34965] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2a6ab55c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/java.io.tmpdir/jetty-0_0_0_0-46517-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3972925615265575082/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:08,671 INFO [Listener at localhost/34965] server.AbstractConnector(333): Started ServerConnector@5d73cda0{HTTP/1.1, (http/1.1)}{0.0.0.0:46517} 2023-07-18 12:15:08,672 INFO [Listener at localhost/34965] server.Server(415): Started @37402ms 2023-07-18 12:15:08,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:08,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@72d41635{HTTP/1.1, (http/1.1)}{0.0.0.0:34123} 2023-07-18 12:15:08,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37407ms 2023-07-18 12:15:08,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:08,679 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 12:15:08,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:08,680 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:08,680 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:08,680 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:08,680 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:08,680 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:08,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 12:15:08,684 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 12:15:08,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35371,1689682507989 from backup master directory 2023-07-18 12:15:08,685 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:08,685 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 12:15:08,685 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:15:08,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:08,701 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/hbase.id with ID: 5dcd8402-2f41-41f3-9dfc-940dac18d9a8 2023-07-18 12:15:08,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:08,717 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:08,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6e141dce to 127.0.0.1:65201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:08,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@160f7e04, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:08,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:08,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 12:15:08,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:08,742 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store-tmp 2023-07-18 12:15:08,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:08,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 12:15:08,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:08,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:08,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 12:15:08,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:08,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 12:15:08,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:15:08,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/WALs/jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:08,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35371%2C1689682507989, suffix=, logDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/WALs/jenkins-hbase4.apache.org,35371,1689682507989, archiveDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/oldWALs, maxLogs=10 2023-07-18 12:15:08,776 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK] 2023-07-18 12:15:08,776 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK] 2023-07-18 12:15:08,776 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK] 2023-07-18 12:15:08,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/WALs/jenkins-hbase4.apache.org,35371,1689682507989/jenkins-hbase4.apache.org%2C35371%2C1689682507989.1689682508759 2023-07-18 12:15:08,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK], DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK]] 2023-07-18 12:15:08,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:08,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:08,780 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:08,780 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:08,782 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:08,784 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 12:15:08,784 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 12:15:08,785 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:08,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:08,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:08,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:08,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:08,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11055695200, jitterRate=0.029641851782798767}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:08,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:15:08,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 12:15:08,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 12:15:08,794 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 12:15:08,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 12:15:08,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 12:15:08,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 12:15:08,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 12:15:08,796 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 12:15:08,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 12:15:08,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 12:15:08,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 12:15:08,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 12:15:08,801 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:08,801 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 12:15:08,801 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 12:15:08,802 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 12:15:08,804 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:08,804 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:08,804 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 12:15:08,804 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:08,804 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:08,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35371,1689682507989, sessionid=0x101785b7bbc0000, setting cluster-up flag (Was=false) 2023-07-18 12:15:08,810 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:08,814 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 12:15:08,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:08,818 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:08,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 12:15:08,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:08,825 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.hbase-snapshot/.tmp 2023-07-18 12:15:08,829 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 12:15:08,829 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 12:15:08,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 12:15:08,830 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:15:08,830 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-18 12:15:08,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-18 12:15:08,832 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 12:15:08,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 12:15:08,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 12:15:08,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 12:15:08,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-18 12:15:08,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:15:08,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:15:08,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:15:08,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:15:08,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 12:15:08,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:08,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,844 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689682538844 2023-07-18 12:15:08,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 12:15:08,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 12:15:08,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 12:15:08,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 12:15:08,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 12:15:08,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 12:15:08,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:08,845 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 12:15:08,846 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 12:15:08,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 12:15:08,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 12:15:08,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 12:15:08,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 12:15:08,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 12:15:08,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682508847,5,FailOnTimeoutGroup] 2023-07-18 12:15:08,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682508847,5,FailOnTimeoutGroup] 2023-07-18 12:15:08,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 12:15:08,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:08,848 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:08,858 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:08,859 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:08,859 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113 2023-07-18 12:15:08,867 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:08,868 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 12:15:08,869 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/info 2023-07-18 12:15:08,870 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 12:15:08,870 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:08,870 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 12:15:08,872 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:15:08,872 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 12:15:08,873 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:08,873 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 12:15:08,874 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(951): ClusterId : 5dcd8402-2f41-41f3-9dfc-940dac18d9a8 2023-07-18 12:15:08,874 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(951): ClusterId : 5dcd8402-2f41-41f3-9dfc-940dac18d9a8 2023-07-18 12:15:08,875 DEBUG [RS:2;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:15:08,876 DEBUG [RS:1;jenkins-hbase4:35407] procedure.RegionServerProcedureManagerHost(43): Procedure 
flush-table-proc initializing 2023-07-18 12:15:08,874 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(951): ClusterId : 5dcd8402-2f41-41f3-9dfc-940dac18d9a8 2023-07-18 12:15:08,877 DEBUG [RS:0;jenkins-hbase4:40697] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:15:08,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/table 2023-07-18 12:15:08,878 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 12:15:08,878 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:08,879 DEBUG [RS:1;jenkins-hbase4:35407] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:15:08,879 DEBUG [RS:2;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:15:08,879 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740 2023-07-18 12:15:08,879 DEBUG [RS:1;jenkins-hbase4:35407] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:15:08,879 DEBUG [RS:0;jenkins-hbase4:40697] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:15:08,879 DEBUG [RS:0;jenkins-hbase4:40697] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:15:08,879 DEBUG [RS:2;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:15:08,879 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740 2023-07-18 12:15:08,882 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 12:15:08,882 DEBUG [RS:1;jenkins-hbase4:35407] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:15:08,883 DEBUG [RS:1;jenkins-hbase4:35407] zookeeper.ReadOnlyZKClient(139): Connect 0x32a87b50 to 127.0.0.1:65201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:08,883 DEBUG [RS:0;jenkins-hbase4:40697] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:15:08,883 DEBUG [RS:2;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:15:08,885 DEBUG [RS:0;jenkins-hbase4:40697] zookeeper.ReadOnlyZKClient(139): Connect 0x66d78cfe to 127.0.0.1:65201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:08,885 DEBUG [RS:2;jenkins-hbase4:38273] zookeeper.ReadOnlyZKClient(139): Connect 0x37a86ce4 to 127.0.0.1:65201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:08,887 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 12:15:08,898 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:08,898 DEBUG [RS:1;jenkins-hbase4:35407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c6f8375, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:08,899 DEBUG [RS:1;jenkins-hbase4:35407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38501020, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:08,899 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10207831520, jitterRate=-0.04932160675525665}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 12:15:08,899 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 12:15:08,899 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 12:15:08,899 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 12:15:08,899 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 12:15:08,899 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 12:15:08,899 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 12:15:08,903 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 12:15:08,903 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 12:15:08,903 DEBUG [RS:2;jenkins-hbase4:38273] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d500794, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:08,903 DEBUG [RS:2;jenkins-hbase4:38273] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78528e5a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:08,904 DEBUG [RS:0;jenkins-hbase4:40697] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68107e54, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:08,904 DEBUG [RS:0;jenkins-hbase4:40697] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c2fe78e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:08,905 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 12:15:08,905 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 12:15:08,905 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 12:15:08,907 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 12:15:08,908 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 12:15:08,913 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35407 2023-07-18 12:15:08,913 INFO [RS:1;jenkins-hbase4:35407] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:15:08,913 INFO [RS:1;jenkins-hbase4:35407] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:15:08,913 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 12:15:08,914 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35371,1689682507989 with isa=jenkins-hbase4.apache.org/172.31.14.131:35407, startcode=1689682508346 2023-07-18 12:15:08,914 DEBUG [RS:1;jenkins-hbase4:35407] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:15:08,916 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52213, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:15:08,917 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40697 2023-07-18 12:15:08,918 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35371] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:08,918 INFO [RS:0;jenkins-hbase4:40697] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:15:08,918 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:15:08,918 INFO [RS:0;jenkins-hbase4:40697] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:15:08,919 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 12:15:08,919 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 12:15:08,919 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113 2023-07-18 12:15:08,919 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42421 2023-07-18 12:15:08,919 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45891 2023-07-18 12:15:08,919 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35371,1689682507989 with isa=jenkins-hbase4.apache.org/172.31.14.131:40697, startcode=1689682508182 2023-07-18 12:15:08,920 DEBUG [RS:0;jenkins-hbase4:40697] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:15:08,921 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53329, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:15:08,921 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38273 2023-07-18 12:15:08,921 INFO [RS:2;jenkins-hbase4:38273] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:15:08,921 INFO [RS:2;jenkins-hbase4:38273] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:15:08,921 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35371] 
master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:08,921 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 12:15:08,921 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:15:08,922 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 12:15:08,922 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113 2023-07-18 12:15:08,922 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42421 2023-07-18 12:15:08,922 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45891 2023-07-18 12:15:08,927 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:08,927 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35371,1689682507989 with isa=jenkins-hbase4.apache.org/172.31.14.131:38273, startcode=1689682508528 2023-07-18 12:15:08,927 DEBUG [RS:2;jenkins-hbase4:38273] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:15:08,928 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50511, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:15:08,929 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35371] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:08,929 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 12:15:08,929 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 12:15:08,929 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113 2023-07-18 12:15:08,929 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42421 2023-07-18 12:15:08,929 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45891 2023-07-18 12:15:08,930 DEBUG [RS:1;jenkins-hbase4:35407] zookeeper.ZKUtil(162): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:08,931 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35407,1689682508346] 2023-07-18 12:15:08,931 WARN [RS:1;jenkins-hbase4:35407] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:15:08,931 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40697,1689682508182] 2023-07-18 12:15:08,931 INFO [RS:1;jenkins-hbase4:35407] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:08,931 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:08,931 DEBUG [RS:0;jenkins-hbase4:40697] zookeeper.ZKUtil(162): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:08,931 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:08,931 WARN [RS:0;jenkins-hbase4:40697] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:15:08,932 INFO [RS:0;jenkins-hbase4:40697] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:08,932 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:08,932 DEBUG [RS:2;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:08,932 WARN [RS:2;jenkins-hbase4:38273] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
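[editor's sketch] Each region server above reports instantiating an AsyncFSWALProvider for its write-ahead log. A minimal sketch of how that provider is normally selected, assuming only the standard HBase 2.x configuration key and nothing taken from this particular run:

    // Sketch, not part of the test: choose the async WAL provider the log shows.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class AsyncWalConfigSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" maps to org.apache.hadoop.hbase.wal.AsyncFSWALProvider.
        conf.set("hbase.wal.provider", "asyncfs");
        return conf;
      }
    }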
2023-07-18 12:15:08,933 INFO [RS:2;jenkins-hbase4:38273] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:08,933 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38273,1689682508528] 2023-07-18 12:15:08,933 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:08,938 DEBUG [RS:1;jenkins-hbase4:35407] zookeeper.ZKUtil(162): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:08,938 DEBUG [RS:1;jenkins-hbase4:35407] zookeeper.ZKUtil(162): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:08,938 DEBUG [RS:0;jenkins-hbase4:40697] zookeeper.ZKUtil(162): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:08,939 DEBUG [RS:2;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:08,939 DEBUG [RS:1;jenkins-hbase4:35407] zookeeper.ZKUtil(162): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:08,939 DEBUG [RS:0;jenkins-hbase4:40697] zookeeper.ZKUtil(162): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:08,939 DEBUG [RS:2;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:08,939 DEBUG [RS:0;jenkins-hbase4:40697] zookeeper.ZKUtil(162): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:08,939 DEBUG [RS:2;jenkins-hbase4:38273] zookeeper.ZKUtil(162): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:08,940 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:15:08,940 INFO [RS:1;jenkins-hbase4:35407] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:15:08,940 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:15:08,940 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:15:08,942 INFO [RS:0;jenkins-hbase4:40697] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:15:08,942 INFO 
[RS:1;jenkins-hbase4:35407] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:15:08,942 INFO [RS:2;jenkins-hbase4:38273] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:15:08,942 INFO [RS:1;jenkins-hbase4:35407] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:15:08,942 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,942 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:15:08,944 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 INFO [RS:0;jenkins-hbase4:40697] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 DEBUG [RS:1;jenkins-hbase4:35407] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,944 INFO [RS:0;jenkins-hbase4:40697] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:15:08,945 INFO [RS:0;jenkins-hbase4:40697] 
hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,945 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,945 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,945 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,945 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,946 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:15:08,947 INFO [RS:2;jenkins-hbase4:38273] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:15:08,948 INFO [RS:2;jenkins-hbase4:38273] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:15:08,948 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,948 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:15:08,948 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:08,948 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,949 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,949 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,949 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,949 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,949 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:08,949 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,949 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,949 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,949 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:0;jenkins-hbase4:40697] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,950 DEBUG [RS:2;jenkins-hbase4:38273] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:08,954 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,954 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,954 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,954 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,954 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,954 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,954 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,955 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,961 INFO [RS:1;jenkins-hbase4:35407] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:15:08,961 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35407,1689682508346-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,966 INFO [RS:2;jenkins-hbase4:38273] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:15:08,966 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38273,1689682508528-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,966 INFO [RS:0;jenkins-hbase4:40697] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:15:08,966 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40697,1689682508182-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:08,973 INFO [RS:1;jenkins-hbase4:35407] regionserver.Replication(203): jenkins-hbase4.apache.org,35407,1689682508346 started 2023-07-18 12:15:08,973 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35407,1689682508346, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35407, sessionid=0x101785b7bbc0002 2023-07-18 12:15:08,973 DEBUG [RS:1;jenkins-hbase4:35407] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:15:08,973 DEBUG [RS:1;jenkins-hbase4:35407] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:08,973 DEBUG [RS:1;jenkins-hbase4:35407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35407,1689682508346' 2023-07-18 12:15:08,973 DEBUG [RS:1;jenkins-hbase4:35407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:15:08,973 DEBUG [RS:1;jenkins-hbase4:35407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:15:08,974 DEBUG [RS:1;jenkins-hbase4:35407] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:15:08,974 DEBUG [RS:1;jenkins-hbase4:35407] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:15:08,974 DEBUG [RS:1;jenkins-hbase4:35407] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:08,974 DEBUG [RS:1;jenkins-hbase4:35407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35407,1689682508346' 2023-07-18 12:15:08,974 DEBUG [RS:1;jenkins-hbase4:35407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:15:08,974 DEBUG [RS:1;jenkins-hbase4:35407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:15:08,975 DEBUG [RS:1;jenkins-hbase4:35407] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:15:08,975 INFO [RS:1;jenkins-hbase4:35407] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 12:15:08,977 INFO [RS:2;jenkins-hbase4:38273] regionserver.Replication(203): jenkins-hbase4.apache.org,38273,1689682508528 started 2023-07-18 12:15:08,977 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38273,1689682508528, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38273, sessionid=0x101785b7bbc0003 2023-07-18 12:15:08,977 DEBUG [RS:2;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:15:08,977 DEBUG [RS:2;jenkins-hbase4:38273] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:08,977 DEBUG [RS:2;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38273,1689682508528' 2023-07-18 12:15:08,977 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:08,977 DEBUG [RS:2;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:15:08,977 DEBUG [RS:2;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:15:08,977 DEBUG [RS:1;jenkins-hbase4:35407] zookeeper.ZKUtil(398): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 12:15:08,977 INFO [RS:1;jenkins-hbase4:35407] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 12:15:08,978 DEBUG [RS:2;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:15:08,978 DEBUG [RS:2;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:15:08,978 DEBUG [RS:2;jenkins-hbase4:38273] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:08,978 DEBUG [RS:2;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38273,1689682508528' 2023-07-18 12:15:08,978 DEBUG [RS:2;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:15:08,978 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,978 DEBUG [RS:2;jenkins-hbase4:38273] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:15:08,978 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,978 DEBUG [RS:2;jenkins-hbase4:38273] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:15:08,978 INFO [RS:2;jenkins-hbase4:38273] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 12:15:08,978 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:08,978 INFO [RS:0;jenkins-hbase4:40697] regionserver.Replication(203): jenkins-hbase4.apache.org,40697,1689682508182 started 2023-07-18 12:15:08,978 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40697,1689682508182, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40697, sessionid=0x101785b7bbc0001 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40697,1689682508182' 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:15:08,979 DEBUG [RS:2;jenkins-hbase4:38273] zookeeper.ZKUtil(398): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 12:15:08,979 INFO [RS:2;jenkins-hbase4:38273] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 12:15:08,979 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,979 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40697,1689682508182' 2023-07-18 12:15:08,979 DEBUG [RS:0;jenkins-hbase4:40697] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:15:08,980 DEBUG [RS:0;jenkins-hbase4:40697] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:15:08,980 DEBUG [RS:0;jenkins-hbase4:40697] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:15:08,980 INFO [RS:0;jenkins-hbase4:40697] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 12:15:08,980 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
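[editor's sketch] At this point all three region servers (35407, 40697, 38273) have registered with the master and started their procedure managers. A hedged client-side sketch of how the same membership could be observed through the public Admin API; the connection setup is assumed and is not something the test itself does here:

    // Sketch: list the live region servers a client would see after registration.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListRegionServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          for (ServerName sn : admin.getRegionServers()) {
            System.out.println("live region server: " + sn); // host,port,startcode
          }
        }
      }
    }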
2023-07-18 12:15:08,980 DEBUG [RS:0;jenkins-hbase4:40697] zookeeper.ZKUtil(398): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 12:15:08,980 INFO [RS:0;jenkins-hbase4:40697] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 12:15:08,980 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:08,980 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:09,058 DEBUG [jenkins-hbase4:35371] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 12:15:09,059 DEBUG [jenkins-hbase4:35371] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:09,059 DEBUG [jenkins-hbase4:35371] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:09,059 DEBUG [jenkins-hbase4:35371] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:09,059 DEBUG [jenkins-hbase4:35371] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:09,059 DEBUG [jenkins-hbase4:35371] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:09,060 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35407,1689682508346, state=OPENING 2023-07-18 12:15:09,062 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 12:15:09,063 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:09,064 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:15:09,064 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35407,1689682508346}] 2023-07-18 12:15:09,081 INFO [RS:1;jenkins-hbase4:35407] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35407%2C1689682508346, suffix=, logDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,35407,1689682508346, archiveDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/oldWALs, maxLogs=32 2023-07-18 12:15:09,081 INFO [RS:2;jenkins-hbase4:38273] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38273%2C1689682508528, suffix=, logDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,38273,1689682508528, archiveDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/oldWALs, maxLogs=32 2023-07-18 12:15:09,082 INFO [RS:0;jenkins-hbase4:40697] wal.AbstractFSWAL(489): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40697%2C1689682508182, suffix=, logDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,40697,1689682508182, archiveDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/oldWALs, maxLogs=32 2023-07-18 12:15:09,107 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK] 2023-07-18 12:15:09,107 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK] 2023-07-18 12:15:09,108 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK] 2023-07-18 12:15:09,120 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK] 2023-07-18 12:15:09,120 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK] 2023-07-18 12:15:09,120 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK] 2023-07-18 12:15:09,121 INFO [RS:0;jenkins-hbase4:40697] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,40697,1689682508182/jenkins-hbase4.apache.org%2C40697%2C1689682508182.1689682509090 2023-07-18 12:15:09,121 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK] 2023-07-18 12:15:09,121 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK] 2023-07-18 12:15:09,121 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK] 2023-07-18 12:15:09,126 DEBUG [RS:0;jenkins-hbase4:40697] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK], 
DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK], DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK]] 2023-07-18 12:15:09,127 INFO [RS:2;jenkins-hbase4:38273] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,38273,1689682508528/jenkins-hbase4.apache.org%2C38273%2C1689682508528.1689682509089 2023-07-18 12:15:09,131 DEBUG [RS:2;jenkins-hbase4:38273] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK], DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK]] 2023-07-18 12:15:09,131 INFO [RS:1;jenkins-hbase4:35407] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,35407,1689682508346/jenkins-hbase4.apache.org%2C35407%2C1689682508346.1689682509090 2023-07-18 12:15:09,131 DEBUG [RS:1;jenkins-hbase4:35407] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK], DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK]] 2023-07-18 12:15:09,138 WARN [ReadOnlyZKClient-127.0.0.1:65201@0x6e141dce] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 12:15:09,139 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:15:09,142 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40608, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:15:09,143 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35407] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:40608 deadline: 1689682569143, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:09,218 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:09,220 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:15:09,221 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40616, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:15:09,226 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 12:15:09,226 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:09,227 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35407%2C1689682508346.meta, suffix=.meta, 
logDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,35407,1689682508346, archiveDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/oldWALs, maxLogs=32 2023-07-18 12:15:09,242 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK] 2023-07-18 12:15:09,242 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK] 2023-07-18 12:15:09,242 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK] 2023-07-18 12:15:09,244 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/WALs/jenkins-hbase4.apache.org,35407,1689682508346/jenkins-hbase4.apache.org%2C35407%2C1689682508346.meta.1689682509228.meta 2023-07-18 12:15:09,246 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36165,DS-490e10fa-6c99-4ae0-b6f3-c06f9dce1edb,DISK], DatanodeInfoWithStorage[127.0.0.1:36899,DS-a331fdf3-c1ee-43df-aac6-e512da806c0b,DISK], DatanodeInfoWithStorage[127.0.0.1:44219,DS-6ad32553-8f07-4eb5-9b9a-30befea7bbc9,DISK]] 2023-07-18 12:15:09,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:09,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 12:15:09,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 12:15:09,247 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 12:15:09,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 12:15:09,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:09,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 12:15:09,247 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 12:15:09,248 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 12:15:09,249 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/info 2023-07-18 12:15:09,249 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/info 2023-07-18 12:15:09,250 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 12:15:09,250 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:09,250 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 12:15:09,251 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:15:09,251 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:15:09,252 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 12:15:09,252 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:09,252 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 12:15:09,253 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/table 2023-07-18 12:15:09,253 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/table 2023-07-18 12:15:09,254 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 12:15:09,254 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:09,255 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740 2023-07-18 12:15:09,256 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740 2023-07-18 12:15:09,258 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 12:15:09,259 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 12:15:09,259 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11662122720, jitterRate=0.08611981570720673}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 12:15:09,260 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 12:15:09,260 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689682509218 2023-07-18 12:15:09,264 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 12:15:09,265 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 12:15:09,265 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35407,1689682508346, state=OPEN 2023-07-18 12:15:09,266 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 12:15:09,267 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:15:09,268 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 12:15:09,268 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35407,1689682508346 in 203 msec 2023-07-18 12:15:09,270 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 12:15:09,270 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 363 msec 2023-07-18 12:15:09,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 439 msec 2023-07-18 12:15:09,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689682509271, completionTime=-1 2023-07-18 12:15:09,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 12:15:09,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
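[editor's sketch] The entries above show hbase:meta being assigned to jenkins-hbase4.apache.org,35407,... and its ZooKeeper location flipping from OPENING to OPEN. A minimal sketch, assuming an ordinary client connection (nothing reused from the test), of how that published location is read back through the client API:

    // Sketch: ask the cluster connection where hbase:meta currently lives.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation loc = locator.getRegionLocation(new byte[0]); // empty start key
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }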
2023-07-18 12:15:09,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 12:15:09,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689682569275 2023-07-18 12:15:09,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689682629275 2023-07-18 12:15:09,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-18 12:15:09,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35371,1689682507989-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:09,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35371,1689682507989-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:09,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35371,1689682507989-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:09,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35371, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:09,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:09,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
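
[annotation] The ChoreService entries above show the master registering its periodic chores (ClusterStatusChore, BalancerChore, RegionNormalizerChore, CatalogJanitor, HbckChore) with their periods in milliseconds. ChoreService/ScheduledChore are internal HBase classes, but as an illustration of the mechanism the log is describing, a sketch of scheduling a custom chore (names and period here are invented for the example):

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
        public static void main(String[] args) {
            Stoppable stopper = new Stoppable() {
                private volatile boolean stopped;
                @Override public void stop(String why) { stopped = true; }
                @Override public boolean isStopped() { return stopped; }
            };
            ChoreService service = new ChoreService("example");
            // Runs every 60 s until the stopper is stopped or the chore is cancelled,
            // the same way BalancerChore runs every 300000 ms in the log above.
            service.scheduleChore(new ScheduledChore("exampleChore", stopper, 60_000) {
                @Override protected void chore() {
                    System.out.println("periodic work");
                }
            });
        }
    }
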
2023-07-18 12:15:09,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:09,283 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 12:15:09,283 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 12:15:09,284 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:09,285 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:09,286 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,286 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6 empty. 2023-07-18 12:15:09,287 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,287 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 12:15:09,297 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:09,299 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 626837fa249245c8d0bc1b007ca8cbf6, NAME => 'hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp 2023-07-18 12:15:09,307 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:09,307 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 626837fa249245c8d0bc1b007ca8cbf6, disabling compactions & flushes 2023-07-18 12:15:09,307 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 
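
[annotation] The 'hbase:namespace' line above ({NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192', ...}) is the shell-style rendering of a table descriptor. Building an equivalent descriptor with the HBase 2.x client API would look roughly like this (a sketch for reference; this is not the code the master executes, and a user table would then be passed to Admin#createTable):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceTableDescriptorSketch {
        // Shell form: create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW',
        //   IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192'}
        static TableDescriptor build() {
            return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:namespace"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                    .setBloomFilterType(BloomType.ROW)
                    .setInMemory(true)
                    .setMaxVersions(10)
                    .setBlocksize(8192)
                    .build())
                .build();
        }
    }
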
2023-07-18 12:15:09,307 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:09,307 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. after waiting 0 ms 2023-07-18 12:15:09,307 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:09,307 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:09,307 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 626837fa249245c8d0bc1b007ca8cbf6: 2023-07-18 12:15:09,309 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:09,310 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682509310"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682509310"}]},"ts":"1689682509310"} 2023-07-18 12:15:09,312 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 12:15:09,314 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:09,314 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682509314"}]},"ts":"1689682509314"} 2023-07-18 12:15:09,315 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 12:15:09,318 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:09,318 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:09,318 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:09,318 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:09,318 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:09,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=626837fa249245c8d0bc1b007ca8cbf6, ASSIGN}] 2023-07-18 12:15:09,321 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=626837fa249245c8d0bc1b007ca8cbf6, ASSIGN 2023-07-18 12:15:09,322 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=626837fa249245c8d0bc1b007ca8cbf6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40697,1689682508182; forceNewPlan=false, retain=false 2023-07-18 12:15:09,447 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:09,449 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 12:15:09,450 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:09,451 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:09,453 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,453 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9 empty. 
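
[annotation] The hbase:rsgroup descriptor above carries two table-level attributes: the MultiRowMutationEndpoint coprocessor (coprocessor$1) and a SPLIT_POLICY of DisabledRegionSplitPolicy. With the 2.x builder API those attributes sit on the table descriptor itself; a rough sketch of a descriptor shaped the same way (illustrative only, not the master's code path):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupLikeDescriptorSketch {
        static TableDescriptor build() throws IOException {
            return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:rsgroup"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
                // coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                // METADATA => {'SPLIT_POLICY' => '...DisabledRegionSplitPolicy'}
                .setRegionSplitPolicyClassName(
                    "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
                .build();
        }
    }
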
2023-07-18 12:15:09,454 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,454 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 12:15:09,465 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:09,466 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => c46fe29c6ff7902355765deca34d47a9, NAME => 'hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp 2023-07-18 12:15:09,472 INFO [jenkins-hbase4:35371] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 12:15:09,473 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=626837fa249245c8d0bc1b007ca8cbf6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:09,474 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682509473"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682509473"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682509473"}]},"ts":"1689682509473"} 2023-07-18 12:15:09,480 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 626837fa249245c8d0bc1b007ca8cbf6, server=jenkins-hbase4.apache.org,40697,1689682508182}] 2023-07-18 12:15:09,487 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:09,487 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing c46fe29c6ff7902355765deca34d47a9, disabling compactions & flushes 2023-07-18 12:15:09,487 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:09,487 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:09,487 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 
after waiting 0 ms 2023-07-18 12:15:09,487 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:09,487 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:09,487 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for c46fe29c6ff7902355765deca34d47a9: 2023-07-18 12:15:09,490 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:09,491 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682509491"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682509491"}]},"ts":"1689682509491"} 2023-07-18 12:15:09,492 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 12:15:09,493 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:09,493 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682509493"}]},"ts":"1689682509493"} 2023-07-18 12:15:09,494 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 12:15:09,498 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:09,498 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:09,498 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:09,498 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:09,498 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:09,498 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c46fe29c6ff7902355765deca34d47a9, ASSIGN}] 2023-07-18 12:15:09,502 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c46fe29c6ff7902355765deca34d47a9, ASSIGN 2023-07-18 12:15:09,502 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c46fe29c6ff7902355765deca34d47a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38273,1689682508528; forceNewPlan=false, retain=false 2023-07-18 12:15:09,636 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:09,636 DEBUG 
[RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:15:09,638 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42164, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:15:09,644 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:09,644 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 626837fa249245c8d0bc1b007ca8cbf6, NAME => 'hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:09,645 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,645 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:09,645 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,645 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,646 INFO [StoreOpener-626837fa249245c8d0bc1b007ca8cbf6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,647 DEBUG [StoreOpener-626837fa249245c8d0bc1b007ca8cbf6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6/info 2023-07-18 12:15:09,647 DEBUG [StoreOpener-626837fa249245c8d0bc1b007ca8cbf6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6/info 2023-07-18 12:15:09,648 INFO [StoreOpener-626837fa249245c8d0bc1b007ca8cbf6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 626837fa249245c8d0bc1b007ca8cbf6 columnFamilyName info 2023-07-18 12:15:09,648 INFO [StoreOpener-626837fa249245c8d0bc1b007ca8cbf6-1] regionserver.HStore(310): Store=626837fa249245c8d0bc1b007ca8cbf6/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:09,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:09,653 INFO [jenkins-hbase4:35371] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 12:15:09,653 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=c46fe29c6ff7902355765deca34d47a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:09,654 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682509653"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682509653"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682509653"}]},"ts":"1689682509653"} 2023-07-18 12:15:09,654 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:09,655 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 626837fa249245c8d0bc1b007ca8cbf6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10172362720, jitterRate=-0.0526248961687088}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:09,655 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 626837fa249245c8d0bc1b007ca8cbf6: 2023-07-18 12:15:09,656 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure c46fe29c6ff7902355765deca34d47a9, server=jenkins-hbase4.apache.org,38273,1689682508528}] 2023-07-18 12:15:09,656 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6., pid=7, masterSystemTime=1689682509636 2023-07-18 12:15:09,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:09,660 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 
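
[annotation] The region-open entries above print the effective split policy chain (SteppingSplitPolicy wrapping IncreasingToUpperBoundRegionSplitPolicy / ConstantSizeRegionSplitPolicy with its desiredMaxFileSize and jitterRate) and the flush policy. Those are driven by ordinary configuration; a minimal sketch of the usual knobs — key names assumed from stock HBase 2.x, values are examples only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class SplitPolicySketch {
        static Configuration tuned() {
            Configuration conf = HBaseConfiguration.create();
            // Split policy instantiated for new regions; SteppingSplitPolicy is the 2.x default.
            conf.set("hbase.regionserver.region.split.policy",
                "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
            // Base for ConstantSizeRegionSplitPolicy's desiredMaxFileSize (before jitterRate is applied).
            conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
            return conf;
        }
    }
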
2023-07-18 12:15:09,661 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=626837fa249245c8d0bc1b007ca8cbf6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:09,661 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682509661"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682509661"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682509661"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682509661"}]},"ts":"1689682509661"} 2023-07-18 12:15:09,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 12:15:09,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 626837fa249245c8d0bc1b007ca8cbf6, server=jenkins-hbase4.apache.org,40697,1689682508182 in 187 msec 2023-07-18 12:15:09,665 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 12:15:09,665 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=626837fa249245c8d0bc1b007ca8cbf6, ASSIGN in 346 msec 2023-07-18 12:15:09,666 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:09,666 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682509666"}]},"ts":"1689682509666"} 2023-07-18 12:15:09,667 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 12:15:09,671 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:09,672 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 389 msec 2023-07-18 12:15:09,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 12:15:09,685 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:09,685 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:09,689 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:15:09,690 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42172, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-18 12:15:09,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 12:15:09,700 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:09,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-18 12:15:09,705 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 12:15:09,706 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-18 12:15:09,706 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 12:15:09,810 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:09,810 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:15:09,811 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47720, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:15:09,815 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:09,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c46fe29c6ff7902355765deca34d47a9, NAME => 'hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:09,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 12:15:09,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. service=MultiRowMutationService 2023-07-18 12:15:09,815 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 12:15:09,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:09,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,817 INFO [StoreOpener-c46fe29c6ff7902355765deca34d47a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,818 DEBUG [StoreOpener-c46fe29c6ff7902355765deca34d47a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9/m 2023-07-18 12:15:09,818 DEBUG [StoreOpener-c46fe29c6ff7902355765deca34d47a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9/m 2023-07-18 12:15:09,818 INFO [StoreOpener-c46fe29c6ff7902355765deca34d47a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c46fe29c6ff7902355765deca34d47a9 columnFamilyName m 2023-07-18 12:15:09,819 INFO [StoreOpener-c46fe29c6ff7902355765deca34d47a9-1] regionserver.HStore(310): Store=c46fe29c6ff7902355765deca34d47a9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:09,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,820 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:09,877 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:09,878 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c46fe29c6ff7902355765deca34d47a9; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@55ec0a33, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:09,878 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c46fe29c6ff7902355765deca34d47a9: 2023-07-18 12:15:09,879 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9., pid=9, masterSystemTime=1689682509810 2023-07-18 12:15:09,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:09,883 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:09,884 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=c46fe29c6ff7902355765deca34d47a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:09,884 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682509884"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682509884"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682509884"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682509884"}]},"ts":"1689682509884"} 2023-07-18 12:15:09,887 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-18 12:15:09,887 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure c46fe29c6ff7902355765deca34d47a9, server=jenkins-hbase4.apache.org,38273,1689682508528 in 229 msec 2023-07-18 12:15:09,888 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 12:15:09,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c46fe29c6ff7902355765deca34d47a9, ASSIGN in 389 msec 2023-07-18 12:15:09,898 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:09,901 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 195 msec 2023-07-18 12:15:09,902 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:09,902 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682509902"}]},"ts":"1689682509902"} 2023-07-18 12:15:09,904 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 12:15:09,907 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:09,908 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 460 msec 2023-07-18 12:15:09,909 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 12:15:09,911 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 12:15:09,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.225sec 2023-07-18 12:15:09,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-18 12:15:09,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:09,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-18 12:15:09,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-18 12:15:09,914 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:09,915 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:09,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-18 12:15:09,917 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:09,917 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b empty. 2023-07-18 12:15:09,918 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:09,918 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-18 12:15:09,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-18 12:15:09,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-18 12:15:09,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:09,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:09,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 12:15:09,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 12:15:09,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35371,1689682507989-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 12:15:09,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35371,1689682507989-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-18 12:15:09,932 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 12:15:09,944 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:09,946 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 25bbb0242e198e2cda8ac3b33964c58b, NAME => 'hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp 2023-07-18 12:15:09,964 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:15:09,964 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:09,964 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 25bbb0242e198e2cda8ac3b33964c58b, disabling compactions & flushes 2023-07-18 12:15:09,964 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 2023-07-18 12:15:09,964 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 2023-07-18 12:15:09,964 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. after waiting 0 ms 2023-07-18 12:15:09,964 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 2023-07-18 12:15:09,964 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 
2023-07-18 12:15:09,964 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 25bbb0242e198e2cda8ac3b33964c58b: 2023-07-18 12:15:09,965 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47722, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:15:09,967 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 12:15:09,967 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 12:15:09,969 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:09,971 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689682509971"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682509971"}]},"ts":"1689682509971"} 2023-07-18 12:15:09,972 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 12:15:09,973 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:09,973 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682509973"}]},"ts":"1689682509973"} 2023-07-18 12:15:09,974 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:09,975 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:09,975 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-18 12:15:09,978 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 12:15:09,979 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:09,979 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:09,979 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:09,979 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:09,979 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:09,979 DEBUG [Listener at localhost/34965] zookeeper.ReadOnlyZKClient(139): Connect 0x71504e91 to 
127.0.0.1:65201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:09,979 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=25bbb0242e198e2cda8ac3b33964c58b, ASSIGN}] 2023-07-18 12:15:09,981 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35371,1689682507989] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 12:15:09,989 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=25bbb0242e198e2cda8ac3b33964c58b, ASSIGN 2023-07-18 12:15:09,990 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=25bbb0242e198e2cda8ac3b33964c58b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35407,1689682508346; forceNewPlan=false, retain=false 2023-07-18 12:15:09,991 DEBUG [Listener at localhost/34965] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a950151, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:09,995 DEBUG [hconnection-0x7bcb3b7d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:15:09,997 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:15:09,998 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:09,998 INFO [Listener at localhost/34965] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:10,002 DEBUG [Listener at localhost/34965] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 12:15:10,003 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48096, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 12:15:10,006 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 12:15:10,006 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:10,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 12:15:10,008 DEBUG [Listener at localhost/34965] zookeeper.ReadOnlyZKClient(139): Connect 0x054e4bf8 to 127.0.0.1:65201 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 
12:15:10,012 DEBUG [Listener at localhost/34965] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@277c98ff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:10,013 INFO [Listener at localhost/34965] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:65201 2023-07-18 12:15:10,017 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:10,018 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101785b7bbc000a connected 2023-07-18 12:15:10,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-18 12:15:10,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-18 12:15:10,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 12:15:10,035 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:10,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 17 msec 2023-07-18 12:15:10,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 12:15:10,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:10,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-18 12:15:10,140 INFO [jenkins-hbase4:35371] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
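
[annotation] The request above creates namespace 'np1' with the quota properties hbase.namespace.quota.maxregions => '5' and hbase.namespace.quota.maxtables => '2', which the NamespaceAuditor later enforces. The client-side equivalent attaches those properties as NamespaceDescriptor configuration; a short sketch (the Admin instance is assumed to be in scope):

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    public class Np1NamespaceSketch {
        // Shell form: create_namespace 'np1',
        //   {'hbase.namespace.quota.maxregions'=>'5', 'hbase.namespace.quota.maxtables'=>'2'}
        static void create(Admin admin) throws IOException {
            admin.createNamespace(NamespaceDescriptor.create("np1")
                .addConfiguration("hbase.namespace.quota.maxregions", "5")
                .addConfiguration("hbase.namespace.quota.maxtables", "2")
                .build());
        }
    }
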
2023-07-18 12:15:10,141 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=25bbb0242e198e2cda8ac3b33964c58b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:10,141 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689682510141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682510141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682510141"}]},"ts":"1689682510141"} 2023-07-18 12:15:10,142 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:10,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-18 12:15:10,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 25bbb0242e198e2cda8ac3b33964c58b, server=jenkins-hbase4.apache.org,35407,1689682508346}] 2023-07-18 12:15:10,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:15:10,144 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:10,145 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 12:15:10,148 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:10,150 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,151 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/256e0878c6119a5e6d098208d3dadf40 empty. 
2023-07-18 12:15:10,151 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,151 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 12:15:10,193 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:10,195 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 256e0878c6119a5e6d098208d3dadf40, NAME => 'np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp 2023-07-18 12:15:10,214 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:10,214 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 256e0878c6119a5e6d098208d3dadf40, disabling compactions & flushes 2023-07-18 12:15:10,214 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:10,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:10,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. after waiting 0 ms 2023-07-18 12:15:10,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:10,215 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:10,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 256e0878c6119a5e6d098208d3dadf40: 2023-07-18 12:15:10,217 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:10,218 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682510218"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682510218"}]},"ts":"1689682510218"} 2023-07-18 12:15:10,220 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 12:15:10,221 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:10,221 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682510221"}]},"ts":"1689682510221"} 2023-07-18 12:15:10,222 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-18 12:15:10,225 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:10,226 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:10,226 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:10,226 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:10,226 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:10,226 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=256e0878c6119a5e6d098208d3dadf40, ASSIGN}] 2023-07-18 12:15:10,227 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=256e0878c6119a5e6d098208d3dadf40, ASSIGN 2023-07-18 12:15:10,228 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=256e0878c6119a5e6d098208d3dadf40, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35407,1689682508346; forceNewPlan=false, retain=false 2023-07-18 12:15:10,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:15:10,302 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 
2023-07-18 12:15:10,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 25bbb0242e198e2cda8ac3b33964c58b, NAME => 'hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:10,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:10,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:10,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:10,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:10,305 INFO [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:10,306 DEBUG [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b/q 2023-07-18 12:15:10,306 DEBUG [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b/q 2023-07-18 12:15:10,307 INFO [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 25bbb0242e198e2cda8ac3b33964c58b columnFamilyName q 2023-07-18 12:15:10,307 INFO [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] regionserver.HStore(310): Store=25bbb0242e198e2cda8ac3b33964c58b/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:10,307 INFO [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:10,309 DEBUG 
[StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b/u 2023-07-18 12:15:10,309 DEBUG [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b/u 2023-07-18 12:15:10,309 INFO [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 25bbb0242e198e2cda8ac3b33964c58b columnFamilyName u 2023-07-18 12:15:10,310 INFO [StoreOpener-25bbb0242e198e2cda8ac3b33964c58b-1] regionserver.HStore(310): Store=25bbb0242e198e2cda8ac3b33964c58b/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:10,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:10,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:10,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-18 12:15:10,314 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:10,316 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:10,316 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 25bbb0242e198e2cda8ac3b33964c58b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11851351680, jitterRate=0.10374313592910767}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-18 12:15:10,316 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 25bbb0242e198e2cda8ac3b33964c58b: 2023-07-18 12:15:10,317 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b., pid=16, masterSystemTime=1689682510297 2023-07-18 12:15:10,318 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 2023-07-18 12:15:10,318 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 2023-07-18 12:15:10,319 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=25bbb0242e198e2cda8ac3b33964c58b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:10,319 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689682510319"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682510319"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682510319"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682510319"}]},"ts":"1689682510319"} 2023-07-18 12:15:10,321 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-18 12:15:10,322 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 25bbb0242e198e2cda8ac3b33964c58b, server=jenkins-hbase4.apache.org,35407,1689682508346 in 177 msec 2023-07-18 12:15:10,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 12:15:10,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=25bbb0242e198e2cda8ac3b33964c58b, ASSIGN in 343 msec 2023-07-18 12:15:10,324 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:10,324 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682510324"}]},"ts":"1689682510324"} 2023-07-18 12:15:10,325 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-18 12:15:10,327 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:10,328 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 415 msec 2023-07-18 12:15:10,378 INFO [jenkins-hbase4:35371] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 12:15:10,380 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=256e0878c6119a5e6d098208d3dadf40, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:10,380 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682510379"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682510379"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682510379"}]},"ts":"1689682510379"} 2023-07-18 12:15:10,381 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 256e0878c6119a5e6d098208d3dadf40, server=jenkins-hbase4.apache.org,35407,1689682508346}] 2023-07-18 12:15:10,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:15:10,537 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 
2023-07-18 12:15:10,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 256e0878c6119a5e6d098208d3dadf40, NAME => 'np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:10,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:10,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,540 INFO [StoreOpener-256e0878c6119a5e6d098208d3dadf40-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,541 DEBUG [StoreOpener-256e0878c6119a5e6d098208d3dadf40-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/np1/table1/256e0878c6119a5e6d098208d3dadf40/fam1 2023-07-18 12:15:10,542 DEBUG [StoreOpener-256e0878c6119a5e6d098208d3dadf40-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/np1/table1/256e0878c6119a5e6d098208d3dadf40/fam1 2023-07-18 12:15:10,542 INFO [StoreOpener-256e0878c6119a5e6d098208d3dadf40-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 256e0878c6119a5e6d098208d3dadf40 columnFamilyName fam1 2023-07-18 12:15:10,543 INFO [StoreOpener-256e0878c6119a5e6d098208d3dadf40-1] regionserver.HStore(310): Store=256e0878c6119a5e6d098208d3dadf40/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:10,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/np1/table1/256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/np1/table1/256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:10,550 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/np1/table1/256e0878c6119a5e6d098208d3dadf40/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:10,551 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 256e0878c6119a5e6d098208d3dadf40; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10854800640, jitterRate=0.010932087898254395}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:10,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 256e0878c6119a5e6d098208d3dadf40: 2023-07-18 12:15:10,552 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40., pid=18, masterSystemTime=1689682510533 2023-07-18 12:15:10,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:10,554 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:10,555 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=256e0878c6119a5e6d098208d3dadf40, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:10,555 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682510555"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682510555"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682510555"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682510555"}]},"ts":"1689682510555"} 2023-07-18 12:15:10,559 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 12:15:10,559 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 256e0878c6119a5e6d098208d3dadf40, server=jenkins-hbase4.apache.org,35407,1689682508346 in 176 msec 2023-07-18 12:15:10,560 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-18 12:15:10,560 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=256e0878c6119a5e6d098208d3dadf40, ASSIGN in 333 msec 2023-07-18 12:15:10,561 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:10,561 DEBUG [PEWorker-1] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682510561"}]},"ts":"1689682510561"} 2023-07-18 12:15:10,562 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-18 12:15:10,567 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:10,569 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 430 msec 2023-07-18 12:15:10,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 12:15:10,748 INFO [Listener at localhost/34965] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-18 12:15:10,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:10,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-18 12:15:10,753 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:10,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-18 12:15:10,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 12:15:10,789 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=38 msec 2023-07-18 12:15:10,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 12:15:10,859 INFO [Listener at localhost/34965] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
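The entries above (pids 14-19) record a client creating namespace np1 with hbase.namespace.quota.maxregions=5 and maxtables=2, creating np1:table1 inside it, and then having a second create-table request rolled back against the namespace region quota. A minimal, hypothetical standalone sketch of equivalent HBase 2.x Admin calls follows; it is not the test's own source, and the connection setup, class name, and split keys are assumptions added for illustration.

// Hypothetical reproduction of the logged np1 quota scenario; assumes a
// reachable HBase 2.x cluster via the classpath configuration.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class Np1QuotaSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Namespace limited to 5 regions / 2 tables, as in the log above.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build());

      // np1:table1 with a single region and family fam1 fits under the
      // quota (CreateTableProcedure pid=15 in the log).
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());

      // A pre-split np1:table2 pushes the namespace past its 5-region quota;
      // the master rolls the procedure back (pid=19 above) and the client
      // sees the QuotaExceededException message. These split keys are
      // illustrative only, not taken from the test.
      byte[][] splits = {
          Bytes.toBytes("b"), Bytes.toBytes("d"), Bytes.toBytes("f"),
          Bytes.toBytes("h"), Bytes.toBytes("j") };
      try {
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("np1", "table2"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build(), splits);
      } catch (IOException e) {
        // Carries the master-side quota rejection seen in the log.
        System.out.println("np1:table2 rejected: " + e.getMessage());
      }
    }
  }
}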
2023-07-18 12:15:10,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:10,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:10,862 INFO [Listener at localhost/34965] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-18 12:15:10,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-18 12:15:10,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-18 12:15:10,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 12:15:10,867 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682510867"}]},"ts":"1689682510867"} 2023-07-18 12:15:10,868 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-18 12:15:10,869 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-18 12:15:10,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=256e0878c6119a5e6d098208d3dadf40, UNASSIGN}] 2023-07-18 12:15:10,870 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=256e0878c6119a5e6d098208d3dadf40, UNASSIGN 2023-07-18 12:15:10,871 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=256e0878c6119a5e6d098208d3dadf40, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:10,871 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682510871"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682510871"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682510871"}]},"ts":"1689682510871"} 2023-07-18 12:15:10,872 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 256e0878c6119a5e6d098208d3dadf40, server=jenkins-hbase4.apache.org,35407,1689682508346}] 2023-07-18 12:15:10,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 12:15:11,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:11,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 256e0878c6119a5e6d098208d3dadf40, disabling compactions & flushes 2023-07-18 12:15:11,025 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:11,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:11,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. after waiting 0 ms 2023-07-18 12:15:11,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:11,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/np1/table1/256e0878c6119a5e6d098208d3dadf40/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:11,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40. 2023-07-18 12:15:11,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 256e0878c6119a5e6d098208d3dadf40: 2023-07-18 12:15:11,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:11,031 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=256e0878c6119a5e6d098208d3dadf40, regionState=CLOSED 2023-07-18 12:15:11,031 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682511031"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682511031"}]},"ts":"1689682511031"} 2023-07-18 12:15:11,033 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-18 12:15:11,034 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 256e0878c6119a5e6d098208d3dadf40, server=jenkins-hbase4.apache.org,35407,1689682508346 in 160 msec 2023-07-18 12:15:11,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-18 12:15:11,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=256e0878c6119a5e6d098208d3dadf40, UNASSIGN in 164 msec 2023-07-18 12:15:11,036 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682511035"}]},"ts":"1689682511035"} 2023-07-18 12:15:11,037 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-18 12:15:11,039 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-18 12:15:11,040 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 176 msec 2023-07-18 12:15:11,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 12:15:11,169 INFO [Listener at localhost/34965] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-18 12:15:11,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-18 12:15:11,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-18 12:15:11,172 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 12:15:11,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-18 12:15:11,173 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 12:15:11,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:11,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 12:15:11,176 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:11,178 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/256e0878c6119a5e6d098208d3dadf40/fam1, FileablePath, hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/256e0878c6119a5e6d098208d3dadf40/recovered.edits] 2023-07-18 12:15:11,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 12:15:11,183 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/256e0878c6119a5e6d098208d3dadf40/recovered.edits/4.seqid to hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/archive/data/np1/table1/256e0878c6119a5e6d098208d3dadf40/recovered.edits/4.seqid 2023-07-18 12:15:11,184 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/.tmp/data/np1/table1/256e0878c6119a5e6d098208d3dadf40 2023-07-18 12:15:11,184 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 12:15:11,186 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 12:15:11,188 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-18 12:15:11,189 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-18 12:15:11,190 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 12:15:11,190 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-18 12:15:11,190 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682511190"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:11,192 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 12:15:11,192 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 256e0878c6119a5e6d098208d3dadf40, NAME => 'np1:table1,,1689682510136.256e0878c6119a5e6d098208d3dadf40.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 12:15:11,192 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-18 12:15:11,192 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689682511192"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:11,193 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-18 12:15:11,196 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 12:15:11,197 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 26 msec 2023-07-18 12:15:11,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 12:15:11,280 INFO [Listener at localhost/34965] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-18 12:15:11,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-18 12:15:11,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-18 12:15:11,297 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 12:15:11,301 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 12:15:11,303 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 12:15:11,304 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-18 12:15:11,304 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:11,305 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 12:15:11,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 12:15:11,307 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 12:15:11,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 19 msec 2023-07-18 12:15:11,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35371] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 12:15:11,406 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 12:15:11,407 INFO [Listener at localhost/34965] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 12:15:11,407 DEBUG [Listener at localhost/34965] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x71504e91 to 127.0.0.1:65201 2023-07-18 12:15:11,407 DEBUG [Listener at localhost/34965] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,407 DEBUG [Listener at localhost/34965] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 12:15:11,407 DEBUG [Listener at localhost/34965] util.JVMClusterUtil(257): Found active master hash=383246483, stopped=false 2023-07-18 12:15:11,407 DEBUG [Listener at localhost/34965] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 12:15:11,407 DEBUG [Listener at localhost/34965] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 12:15:11,407 DEBUG [Listener at localhost/34965] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-18 12:15:11,408 INFO [Listener at localhost/34965] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:11,409 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:11,409 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:11,409 INFO [Listener at localhost/34965] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 12:15:11,409 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:11,409 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:11,409 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:11,409 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:11,411 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:11,411 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:11,412 DEBUG [Listener at localhost/34965] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6e141dce to 127.0.0.1:65201 2023-07-18 12:15:11,412 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:11,412 DEBUG [Listener at localhost/34965] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,412 INFO [Listener at localhost/34965] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40697,1689682508182' ***** 2023-07-18 12:15:11,412 INFO [Listener at localhost/34965] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:11,412 INFO [Listener at localhost/34965] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35407,1689682508346' ***** 2023-07-18 12:15:11,412 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:11,412 INFO [Listener at localhost/34965] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:11,414 INFO [Listener at localhost/34965] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38273,1689682508528' ***** 2023-07-18 12:15:11,414 INFO [Listener at localhost/34965] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:11,414 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:11,415 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:11,428 INFO [RS:2;jenkins-hbase4:38273] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2a6ab55c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:11,428 INFO [RS:1;jenkins-hbase4:35407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@567f5874{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:11,428 INFO [RS:0;jenkins-hbase4:40697] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@400b68e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
2023-07-18 12:15:11,429 INFO [RS:2;jenkins-hbase4:38273] server.AbstractConnector(383): Stopped ServerConnector@5d73cda0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:11,429 INFO [RS:1;jenkins-hbase4:35407] server.AbstractConnector(383): Stopped ServerConnector@4a9e90bf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:11,429 INFO [RS:2;jenkins-hbase4:38273] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:11,429 INFO [RS:0;jenkins-hbase4:40697] server.AbstractConnector(383): Stopped ServerConnector@5d7a011f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:11,429 INFO [RS:1;jenkins-hbase4:35407] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:11,429 INFO [RS:0;jenkins-hbase4:40697] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:11,429 INFO [RS:2;jenkins-hbase4:38273] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68dd3aa9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:11,432 INFO [RS:0;jenkins-hbase4:40697] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10618b14{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:11,432 INFO [RS:2;jenkins-hbase4:38273] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4b63a583{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:11,432 INFO [RS:0;jenkins-hbase4:40697] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6d4587e2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:11,432 INFO [RS:1;jenkins-hbase4:35407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2b5d8c66{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:11,432 INFO [RS:1;jenkins-hbase4:35407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68a050ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:11,433 INFO [RS:0;jenkins-hbase4:40697] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:11,433 INFO [RS:0;jenkins-hbase4:40697] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:11,433 INFO [RS:2;jenkins-hbase4:38273] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:11,433 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:11,433 INFO [RS:2;jenkins-hbase4:38273] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:11,433 INFO [RS:0;jenkins-hbase4:40697] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 12:15:11,434 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:11,434 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(3305): Received CLOSE for 626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:11,433 INFO [RS:2;jenkins-hbase4:38273] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:11,435 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(3305): Received CLOSE for c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:11,435 INFO [RS:1;jenkins-hbase4:35407] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:11,436 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:11,436 DEBUG [RS:0;jenkins-hbase4:40697] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66d78cfe to 127.0.0.1:65201 2023-07-18 12:15:11,436 INFO [RS:1;jenkins-hbase4:35407] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:11,436 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:11,436 DEBUG [RS:0;jenkins-hbase4:40697] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,437 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 12:15:11,437 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1478): Online Regions={626837fa249245c8d0bc1b007ca8cbf6=hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6.} 2023-07-18 12:15:11,436 INFO [RS:1;jenkins-hbase4:35407] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:11,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c46fe29c6ff7902355765deca34d47a9, disabling compactions & flushes 2023-07-18 12:15:11,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 626837fa249245c8d0bc1b007ca8cbf6, disabling compactions & flushes 2023-07-18 12:15:11,437 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:11,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:11,438 DEBUG [RS:2;jenkins-hbase4:38273] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x37a86ce4 to 127.0.0.1:65201 2023-07-18 12:15:11,438 DEBUG [RS:2;jenkins-hbase4:38273] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,438 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 12:15:11,438 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1478): Online Regions={c46fe29c6ff7902355765deca34d47a9=hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9.} 2023-07-18 12:15:11,438 DEBUG [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1504): Waiting on c46fe29c6ff7902355765deca34d47a9 2023-07-18 12:15:11,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 
2023-07-18 12:15:11,438 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(3305): Received CLOSE for 25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:11,438 DEBUG [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1504): Waiting on 626837fa249245c8d0bc1b007ca8cbf6 2023-07-18 12:15:11,438 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:11,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:11,438 DEBUG [RS:1;jenkins-hbase4:35407] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x32a87b50 to 127.0.0.1:65201 2023-07-18 12:15:11,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:11,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 25bbb0242e198e2cda8ac3b33964c58b, disabling compactions & flushes 2023-07-18 12:15:11,439 DEBUG [RS:1;jenkins-hbase4:35407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. after waiting 0 ms 2023-07-18 12:15:11,439 INFO [RS:1;jenkins-hbase4:35407] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:11,439 INFO [RS:1;jenkins-hbase4:35407] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:11,439 INFO [RS:1;jenkins-hbase4:35407] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:11,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 2023-07-18 12:15:11,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. after waiting 0 ms 2023-07-18 12:15:11,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 2023-07-18 12:15:11,440 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 12:15:11,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:11,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. after waiting 0 ms 2023-07-18 12:15:11,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:11,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 
2023-07-18 12:15:11,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 626837fa249245c8d0bc1b007ca8cbf6 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-18 12:15:11,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c46fe29c6ff7902355765deca34d47a9 1/1 column families, dataSize=633 B heapSize=1.09 KB 2023-07-18 12:15:11,442 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 12:15:11,442 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1478): Online Regions={25bbb0242e198e2cda8ac3b33964c58b=hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b., 1588230740=hbase:meta,,1.1588230740} 2023-07-18 12:15:11,443 DEBUG [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1504): Waiting on 1588230740, 25bbb0242e198e2cda8ac3b33964c58b 2023-07-18 12:15:11,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 12:15:11,445 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 12:15:11,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 12:15:11,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 12:15:11,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 12:15:11,446 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-18 12:15:11,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/quota/25bbb0242e198e2cda8ac3b33964c58b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:11,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 2023-07-18 12:15:11,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 25bbb0242e198e2cda8ac3b33964c58b: 2023-07-18 12:15:11,459 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689682509911.25bbb0242e198e2cda8ac3b33964c58b. 
2023-07-18 12:15:11,462 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:11,462 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:11,462 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:11,482 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/.tmp/info/379b4772289b4102a5514362392efed4 2023-07-18 12:15:11,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=633 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9/.tmp/m/e96826b5e9ab449abf2d0aa95a1a8f8e 2023-07-18 12:15:11,497 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 379b4772289b4102a5514362392efed4 2023-07-18 12:15:11,504 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6/.tmp/info/22f15aea30404d45a951036d1749573d 2023-07-18 12:15:11,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9/.tmp/m/e96826b5e9ab449abf2d0aa95a1a8f8e as hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9/m/e96826b5e9ab449abf2d0aa95a1a8f8e 2023-07-18 12:15:11,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9/m/e96826b5e9ab449abf2d0aa95a1a8f8e, entries=1, sequenceid=7, filesize=4.9 K 2023-07-18 12:15:11,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~633 B/633, heapSize ~1.07 KB/1096, currentSize=0 B/0 for c46fe29c6ff7902355765deca34d47a9 in 77ms, sequenceid=7, compaction requested=false 2023-07-18 12:15:11,521 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 22f15aea30404d45a951036d1749573d 2023-07-18 12:15:11,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6/.tmp/info/22f15aea30404d45a951036d1749573d as hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6/info/22f15aea30404d45a951036d1749573d 2023-07-18 12:15:11,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom 
(CompoundBloomFilter) metadata for 22f15aea30404d45a951036d1749573d 2023-07-18 12:15:11,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6/info/22f15aea30404d45a951036d1749573d, entries=3, sequenceid=8, filesize=5.0 K 2023-07-18 12:15:11,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 626837fa249245c8d0bc1b007ca8cbf6 in 92ms, sequenceid=8, compaction requested=false 2023-07-18 12:15:11,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/rsgroup/c46fe29c6ff7902355765deca34d47a9/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-18 12:15:11,539 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/.tmp/rep_barrier/279d0b3897554bfaaca8a37264cd8af9 2023-07-18 12:15:11,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:15:11,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:11,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c46fe29c6ff7902355765deca34d47a9: 2023-07-18 12:15:11,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689682509447.c46fe29c6ff7902355765deca34d47a9. 2023-07-18 12:15:11,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/namespace/626837fa249245c8d0bc1b007ca8cbf6/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-18 12:15:11,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 2023-07-18 12:15:11,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 626837fa249245c8d0bc1b007ca8cbf6: 2023-07-18 12:15:11,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689682509282.626837fa249245c8d0bc1b007ca8cbf6. 
2023-07-18 12:15:11,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 279d0b3897554bfaaca8a37264cd8af9 2023-07-18 12:15:11,557 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/.tmp/table/67600eb5b8944e7898e1308dc2f751ec 2023-07-18 12:15:11,562 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 67600eb5b8944e7898e1308dc2f751ec 2023-07-18 12:15:11,563 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/.tmp/info/379b4772289b4102a5514362392efed4 as hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/info/379b4772289b4102a5514362392efed4 2023-07-18 12:15:11,568 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 379b4772289b4102a5514362392efed4 2023-07-18 12:15:11,569 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/info/379b4772289b4102a5514362392efed4, entries=32, sequenceid=31, filesize=8.5 K 2023-07-18 12:15:11,570 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/.tmp/rep_barrier/279d0b3897554bfaaca8a37264cd8af9 as hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/rep_barrier/279d0b3897554bfaaca8a37264cd8af9 2023-07-18 12:15:11,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 279d0b3897554bfaaca8a37264cd8af9 2023-07-18 12:15:11,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/rep_barrier/279d0b3897554bfaaca8a37264cd8af9, entries=1, sequenceid=31, filesize=4.9 K 2023-07-18 12:15:11,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/.tmp/table/67600eb5b8944e7898e1308dc2f751ec as hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/table/67600eb5b8944e7898e1308dc2f751ec 2023-07-18 12:15:11,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 67600eb5b8944e7898e1308dc2f751ec 2023-07-18 12:15:11,583 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/table/67600eb5b8944e7898e1308dc2f751ec, 
entries=8, sequenceid=31, filesize=5.2 K 2023-07-18 12:15:11,583 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 138ms, sequenceid=31, compaction requested=false 2023-07-18 12:15:11,593 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-18 12:15:11,593 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:15:11,593 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 12:15:11,593 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 12:15:11,593 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 12:15:11,638 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38273,1689682508528; all regions closed. 2023-07-18 12:15:11,638 DEBUG [RS:2;jenkins-hbase4:38273] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 12:15:11,638 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40697,1689682508182; all regions closed. 2023-07-18 12:15:11,638 DEBUG [RS:0;jenkins-hbase4:40697] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 12:15:11,643 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35407,1689682508346; all regions closed. 2023-07-18 12:15:11,643 DEBUG [RS:1;jenkins-hbase4:35407] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 12:15:11,650 DEBUG [RS:0;jenkins-hbase4:40697] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/oldWALs 2023-07-18 12:15:11,650 INFO [RS:0;jenkins-hbase4:40697] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40697%2C1689682508182:(num 1689682509090) 2023-07-18 12:15:11,650 DEBUG [RS:0;jenkins-hbase4:40697] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,651 INFO [RS:0;jenkins-hbase4:40697] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:11,651 DEBUG [RS:2;jenkins-hbase4:38273] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/oldWALs 2023-07-18 12:15:11,651 INFO [RS:0;jenkins-hbase4:40697] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:11,651 INFO [RS:2;jenkins-hbase4:38273] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38273%2C1689682508528:(num 1689682509089) 2023-07-18 12:15:11,651 INFO [RS:0;jenkins-hbase4:40697] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:11,651 INFO [RS:0;jenkins-hbase4:40697] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-18 12:15:11,651 INFO [RS:0;jenkins-hbase4:40697] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:11,651 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:11,651 DEBUG [RS:2;jenkins-hbase4:38273] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,652 INFO [RS:2;jenkins-hbase4:38273] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:11,652 INFO [RS:2;jenkins-hbase4:38273] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:11,652 INFO [RS:2;jenkins-hbase4:38273] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:11,652 INFO [RS:2;jenkins-hbase4:38273] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:11,652 INFO [RS:2;jenkins-hbase4:38273] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:11,652 INFO [RS:0;jenkins-hbase4:40697] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40697 2023-07-18 12:15:11,652 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:11,653 INFO [RS:2;jenkins-hbase4:38273] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38273 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:11,657 DEBUG [RS:1;jenkins-hbase4:35407] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/oldWALs 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:11,657 INFO [RS:1;jenkins-hbase4:35407] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35407%2C1689682508346.meta:.meta(num 1689682509228) 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40697,1689682508182 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:11,657 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38273,1689682508528 2023-07-18 12:15:11,658 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38273,1689682508528] 2023-07-18 12:15:11,658 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38273,1689682508528; numProcessing=1 2023-07-18 12:15:11,661 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38273,1689682508528 already deleted, retry=false 2023-07-18 12:15:11,661 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38273,1689682508528 expired; onlineServers=2 2023-07-18 12:15:11,661 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40697,1689682508182] 2023-07-18 12:15:11,661 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40697,1689682508182; numProcessing=2 2023-07-18 12:15:11,663 DEBUG [RS:1;jenkins-hbase4:35407] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/oldWALs 2023-07-18 12:15:11,663 INFO [RS:1;jenkins-hbase4:35407] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35407%2C1689682508346:(num 1689682509090) 2023-07-18 12:15:11,663 DEBUG [RS:1;jenkins-hbase4:35407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,663 INFO [RS:1;jenkins-hbase4:35407] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:11,663 INFO [RS:1;jenkins-hbase4:35407] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:11,663 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 12:15:11,664 INFO [RS:1;jenkins-hbase4:35407] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35407 2023-07-18 12:15:11,760 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:11,760 INFO [RS:0;jenkins-hbase4:40697] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40697,1689682508182; zookeeper connection closed. 2023-07-18 12:15:11,760 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:40697-0x101785b7bbc0001, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:11,761 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d946daf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d946daf 2023-07-18 12:15:11,761 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:11,761 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35407,1689682508346 2023-07-18 12:15:11,761 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40697,1689682508182 already deleted, retry=false 2023-07-18 12:15:11,761 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40697,1689682508182 expired; onlineServers=1 2023-07-18 12:15:11,762 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35407,1689682508346] 2023-07-18 12:15:11,762 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35407,1689682508346; numProcessing=3 2023-07-18 12:15:11,764 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35407,1689682508346 already deleted, retry=false 2023-07-18 12:15:11,764 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35407,1689682508346 expired; onlineServers=0 2023-07-18 12:15:11,764 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35371,1689682507989' ***** 2023-07-18 12:15:11,764 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 12:15:11,765 DEBUG [M:0;jenkins-hbase4:35371] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12e86691, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:11,765 INFO [M:0;jenkins-hbase4:35371] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:11,767 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, 
quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:11,767 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:11,767 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:11,768 INFO [M:0;jenkins-hbase4:35371] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@15e5c50{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 12:15:11,768 INFO [M:0;jenkins-hbase4:35371] server.AbstractConnector(383): Stopped ServerConnector@3bdbfe7c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:11,768 INFO [M:0;jenkins-hbase4:35371] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:11,768 INFO [M:0;jenkins-hbase4:35371] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3bee122e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:11,769 INFO [M:0;jenkins-hbase4:35371] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@686f0631{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:11,769 INFO [M:0;jenkins-hbase4:35371] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35371,1689682507989 2023-07-18 12:15:11,769 INFO [M:0;jenkins-hbase4:35371] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35371,1689682507989; all regions closed. 2023-07-18 12:15:11,769 DEBUG [M:0;jenkins-hbase4:35371] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:11,769 INFO [M:0;jenkins-hbase4:35371] master.HMaster(1491): Stopping master jetty server 2023-07-18 12:15:11,770 INFO [M:0;jenkins-hbase4:35371] server.AbstractConnector(383): Stopped ServerConnector@72d41635{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:11,770 DEBUG [M:0;jenkins-hbase4:35371] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 12:15:11,770 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 12:15:11,770 DEBUG [M:0;jenkins-hbase4:35371] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 12:15:11,770 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682508847] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682508847,5,FailOnTimeoutGroup] 2023-07-18 12:15:11,770 INFO [M:0;jenkins-hbase4:35371] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-07-18 12:15:11,770 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682508847] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682508847,5,FailOnTimeoutGroup] 2023-07-18 12:15:11,771 INFO [M:0;jenkins-hbase4:35371] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 12:15:11,772 INFO [M:0;jenkins-hbase4:35371] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:11,772 DEBUG [M:0;jenkins-hbase4:35371] master.HMaster(1512): Stopping service threads 2023-07-18 12:15:11,772 INFO [M:0;jenkins-hbase4:35371] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 12:15:11,772 ERROR [M:0;jenkins-hbase4:35371] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 12:15:11,773 INFO [M:0;jenkins-hbase4:35371] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 12:15:11,773 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 12:15:11,773 DEBUG [M:0;jenkins-hbase4:35371] zookeeper.ZKUtil(398): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 12:15:11,773 WARN [M:0;jenkins-hbase4:35371] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 12:15:11,773 INFO [M:0;jenkins-hbase4:35371] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 12:15:11,774 INFO [M:0;jenkins-hbase4:35371] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 12:15:11,774 DEBUG [M:0;jenkins-hbase4:35371] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 12:15:11,774 INFO [M:0;jenkins-hbase4:35371] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:11,774 DEBUG [M:0;jenkins-hbase4:35371] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:11,774 DEBUG [M:0;jenkins-hbase4:35371] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 12:15:11,774 DEBUG [M:0;jenkins-hbase4:35371] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 12:15:11,774 INFO [M:0;jenkins-hbase4:35371] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.98 KB heapSize=109.13 KB 2023-07-18 12:15:11,786 INFO [M:0;jenkins-hbase4:35371] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.98 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3a6148ba2fbc45c9ab7baf61704bf6cd 2023-07-18 12:15:11,792 DEBUG [M:0;jenkins-hbase4:35371] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3a6148ba2fbc45c9ab7baf61704bf6cd as hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3a6148ba2fbc45c9ab7baf61704bf6cd 2023-07-18 12:15:11,797 INFO [M:0;jenkins-hbase4:35371] regionserver.HStore(1080): Added hdfs://localhost:42421/user/jenkins/test-data/9e3416dd-d69e-fb1c-2fcf-2bc81c635113/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3a6148ba2fbc45c9ab7baf61704bf6cd, entries=24, sequenceid=194, filesize=12.4 K 2023-07-18 12:15:11,798 INFO [M:0;jenkins-hbase4:35371] regionserver.HRegion(2948): Finished flush of dataSize ~92.98 KB/95214, heapSize ~109.11 KB/111728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=194, compaction requested=false 2023-07-18 12:15:11,799 INFO [M:0;jenkins-hbase4:35371] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:11,799 DEBUG [M:0;jenkins-hbase4:35371] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:15:11,803 INFO [M:0;jenkins-hbase4:35371] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 12:15:11,803 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:11,804 INFO [M:0;jenkins-hbase4:35371] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35371 2023-07-18 12:15:11,805 DEBUG [M:0;jenkins-hbase4:35371] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35371,1689682507989 already deleted, retry=false 2023-07-18 12:15:11,919 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:11,919 INFO [M:0;jenkins-hbase4:35371] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35371,1689682507989; zookeeper connection closed. 2023-07-18 12:15:11,919 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): master:35371-0x101785b7bbc0000, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:12,019 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:12,019 INFO [RS:1;jenkins-hbase4:35407] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35407,1689682508346; zookeeper connection closed. 
2023-07-18 12:15:12,020 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:35407-0x101785b7bbc0002, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:12,020 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1fc007ff] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1fc007ff 2023-07-18 12:15:12,120 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:12,120 INFO [RS:2;jenkins-hbase4:38273] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38273,1689682508528; zookeeper connection closed. 2023-07-18 12:15:12,120 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): regionserver:38273-0x101785b7bbc0003, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:12,120 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@59b4e0ad] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@59b4e0ad 2023-07-18 12:15:12,120 INFO [Listener at localhost/34965] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-18 12:15:12,120 WARN [Listener at localhost/34965] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:12,124 INFO [Listener at localhost/34965] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:12,228 WARN [BP-1684751062-172.31.14.131-1689682507166 heartbeating to localhost/127.0.0.1:42421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 12:15:12,228 WARN [BP-1684751062-172.31.14.131-1689682507166 heartbeating to localhost/127.0.0.1:42421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1684751062-172.31.14.131-1689682507166 (Datanode Uuid c88077f7-c9ed-49a3-a554-c09b2623d890) service to localhost/127.0.0.1:42421 2023-07-18 12:15:12,229 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/dfs/data/data5/current/BP-1684751062-172.31.14.131-1689682507166] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:12,229 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/dfs/data/data6/current/BP-1684751062-172.31.14.131-1689682507166] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:12,230 WARN [Listener at localhost/34965] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:12,233 INFO [Listener at localhost/34965] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:12,337 WARN [BP-1684751062-172.31.14.131-1689682507166 heartbeating to localhost/127.0.0.1:42421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-18 12:15:12,337 WARN [BP-1684751062-172.31.14.131-1689682507166 heartbeating to localhost/127.0.0.1:42421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1684751062-172.31.14.131-1689682507166 (Datanode Uuid 7591d0cf-5049-4bdf-b641-ce854d71acd8) service to localhost/127.0.0.1:42421 2023-07-18 12:15:12,338 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/dfs/data/data3/current/BP-1684751062-172.31.14.131-1689682507166] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:12,338 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/dfs/data/data4/current/BP-1684751062-172.31.14.131-1689682507166] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:12,339 WARN [Listener at localhost/34965] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:12,342 INFO [Listener at localhost/34965] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:12,446 WARN [BP-1684751062-172.31.14.131-1689682507166 heartbeating to localhost/127.0.0.1:42421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 12:15:12,446 WARN [BP-1684751062-172.31.14.131-1689682507166 heartbeating to localhost/127.0.0.1:42421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1684751062-172.31.14.131-1689682507166 (Datanode Uuid 43a7d7db-04e6-4b5c-958e-03e13bb064a0) service to localhost/127.0.0.1:42421 2023-07-18 12:15:12,447 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/dfs/data/data1/current/BP-1684751062-172.31.14.131-1689682507166] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:12,447 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/cluster_be968901-8c8f-c86c-096d-8fe051c4bda5/dfs/data/data2/current/BP-1684751062-172.31.14.131-1689682507166] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:12,456 INFO [Listener at localhost/34965] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:12,571 INFO [Listener at localhost/34965] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 12:15:12,605 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 12:15:12,605 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 12:15:12,605 INFO [Listener at localhost/34965] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.log.dir so I do NOT create it in target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019 2023-07-18 12:15:12,605 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b63f2ce5-51c9-9ba5-90dd-a9296492e459/hadoop.tmp.dir so I do NOT create it in target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019 2023-07-18 12:15:12,605 INFO [Listener at localhost/34965] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a, deleteOnExit=true 2023-07-18 12:15:12,606 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 12:15:12,606 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/test.cache.data in system properties and HBase conf 2023-07-18 12:15:12,606 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 12:15:12,606 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir in system properties and HBase conf 2023-07-18 12:15:12,606 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 12:15:12,606 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 12:15:12,606 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 12:15:12,606 DEBUG [Listener at localhost/34965] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/nfs.dump.dir in system properties and HBase conf 2023-07-18 12:15:12,607 INFO [Listener at localhost/34965] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir in system properties and HBase conf 2023-07-18 12:15:12,608 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 12:15:12,608 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 12:15:12,608 INFO [Listener at localhost/34965] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 12:15:12,612 WARN [Listener at localhost/34965] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 12:15:12,612 WARN [Listener at localhost/34965] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 12:15:12,653 WARN [Listener at localhost/34965] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 12:15:12,655 INFO [Listener at localhost/34965] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 12:15:12,660 INFO [Listener at localhost/34965] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/Jetty_localhost_35329_hdfs____.kh830e/webapp 2023-07-18 12:15:12,669 DEBUG [Listener at localhost/34965-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101785b7bbc000a, quorum=127.0.0.1:65201, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 12:15:12,669 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101785b7bbc000a, quorum=127.0.0.1:65201, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 12:15:12,759 INFO [Listener at localhost/34965] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35329 2023-07-18 12:15:12,763 WARN [Listener at localhost/34965] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 12:15:12,764 WARN [Listener at localhost/34965] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 12:15:12,802 WARN [Listener at localhost/33969] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 12:15:12,811 WARN [Listener at localhost/33969] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 12:15:12,813 WARN [Listener 
at localhost/33969] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 12:15:12,814 INFO [Listener at localhost/33969] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 12:15:12,819 INFO [Listener at localhost/33969] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/Jetty_localhost_44361_datanode____.ustij/webapp 2023-07-18 12:15:12,913 INFO [Listener at localhost/33969] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44361 2023-07-18 12:15:12,920 WARN [Listener at localhost/34137] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 12:15:12,936 WARN [Listener at localhost/34137] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 12:15:12,938 WARN [Listener at localhost/34137] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 12:15:12,939 INFO [Listener at localhost/34137] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 12:15:12,942 INFO [Listener at localhost/34137] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/Jetty_localhost_35299_datanode____ljpslz/webapp 2023-07-18 12:15:13,022 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4005e2197394f751: Processing first storage report for DS-e32eccea-d550-4116-9fac-59bf1f27b9bf from datanode 192c3ef7-b146-4b84-9198-0128bf8ba6e8 2023-07-18 12:15:13,022 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4005e2197394f751: from storage DS-e32eccea-d550-4116-9fac-59bf1f27b9bf node DatanodeRegistration(127.0.0.1:43721, datanodeUuid=192c3ef7-b146-4b84-9198-0128bf8ba6e8, infoPort=32971, infoSecurePort=0, ipcPort=34137, storageInfo=lv=-57;cid=testClusterID;nsid=737537786;c=1689682512614), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:13,022 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4005e2197394f751: Processing first storage report for DS-6e0d54a3-29cb-4a98-b089-010aa506b8ab from datanode 192c3ef7-b146-4b84-9198-0128bf8ba6e8 2023-07-18 12:15:13,022 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4005e2197394f751: from storage DS-6e0d54a3-29cb-4a98-b089-010aa506b8ab node DatanodeRegistration(127.0.0.1:43721, datanodeUuid=192c3ef7-b146-4b84-9198-0128bf8ba6e8, infoPort=32971, infoSecurePort=0, ipcPort=34137, storageInfo=lv=-57;cid=testClusterID;nsid=737537786;c=1689682512614), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:13,046 INFO [Listener at localhost/34137] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35299 2023-07-18 12:15:13,053 WARN [Listener at localhost/35449] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-18 12:15:13,066 WARN [Listener at localhost/35449] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 12:15:13,068 WARN [Listener at localhost/35449] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 12:15:13,069 INFO [Listener at localhost/35449] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 12:15:13,071 INFO [Listener at localhost/35449] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/Jetty_localhost_36075_datanode____.mmj0u/webapp 2023-07-18 12:15:13,144 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3b835fd2e2217bcd: Processing first storage report for DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e from datanode 92390872-58a0-4d76-b6c4-ca8dc8a22705 2023-07-18 12:15:13,144 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3b835fd2e2217bcd: from storage DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e node DatanodeRegistration(127.0.0.1:34253, datanodeUuid=92390872-58a0-4d76-b6c4-ca8dc8a22705, infoPort=38163, infoSecurePort=0, ipcPort=35449, storageInfo=lv=-57;cid=testClusterID;nsid=737537786;c=1689682512614), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:13,144 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3b835fd2e2217bcd: Processing first storage report for DS-b022a8bf-431b-4202-b499-658fe77797ba from datanode 92390872-58a0-4d76-b6c4-ca8dc8a22705 2023-07-18 12:15:13,144 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3b835fd2e2217bcd: from storage DS-b022a8bf-431b-4202-b499-658fe77797ba node DatanodeRegistration(127.0.0.1:34253, datanodeUuid=92390872-58a0-4d76-b6c4-ca8dc8a22705, infoPort=38163, infoSecurePort=0, ipcPort=35449, storageInfo=lv=-57;cid=testClusterID;nsid=737537786;c=1689682512614), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:13,177 INFO [Listener at localhost/35449] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36075 2023-07-18 12:15:13,184 WARN [Listener at localhost/41565] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 12:15:13,281 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd38f174b19ce9ec7: Processing first storage report for DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a from datanode 7bb30ffd-6a55-419b-bcce-1fbc28dc7eff 2023-07-18 12:15:13,281 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd38f174b19ce9ec7: from storage DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a node DatanodeRegistration(127.0.0.1:41025, datanodeUuid=7bb30ffd-6a55-419b-bcce-1fbc28dc7eff, infoPort=34191, infoSecurePort=0, ipcPort=41565, storageInfo=lv=-57;cid=testClusterID;nsid=737537786;c=1689682512614), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:13,281 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd38f174b19ce9ec7: Processing first storage 
report for DS-41ea3b97-aedf-4b57-bf4a-7cf37ca3721b from datanode 7bb30ffd-6a55-419b-bcce-1fbc28dc7eff 2023-07-18 12:15:13,281 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd38f174b19ce9ec7: from storage DS-41ea3b97-aedf-4b57-bf4a-7cf37ca3721b node DatanodeRegistration(127.0.0.1:41025, datanodeUuid=7bb30ffd-6a55-419b-bcce-1fbc28dc7eff, infoPort=34191, infoSecurePort=0, ipcPort=41565, storageInfo=lv=-57;cid=testClusterID;nsid=737537786;c=1689682512614), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 12:15:13,290 DEBUG [Listener at localhost/41565] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019 2023-07-18 12:15:13,292 INFO [Listener at localhost/41565] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/zookeeper_0, clientPort=49768, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 12:15:13,293 INFO [Listener at localhost/41565] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49768 2023-07-18 12:15:13,293 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,294 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,312 INFO [Listener at localhost/41565] util.FSUtils(471): Created version file at hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31 with version=8 2023-07-18 12:15:13,312 INFO [Listener at localhost/41565] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:46497/user/jenkins/test-data/883a0c2c-4c85-488c-6081-6cdf708533ae/hbase-staging 2023-07-18 12:15:13,313 DEBUG [Listener at localhost/41565] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 12:15:13,313 DEBUG [Listener at localhost/41565] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 12:15:13,314 DEBUG [Listener at localhost/41565] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 12:15:13,314 DEBUG [Listener at localhost/41565] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-18 12:15:13,315 INFO [Listener at localhost/41565] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:13,315 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,315 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,315 INFO [Listener at localhost/41565] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:13,315 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,315 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:13,315 INFO [Listener at localhost/41565] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:13,316 INFO [Listener at localhost/41565] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41811 2023-07-18 12:15:13,316 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,317 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,318 INFO [Listener at localhost/41565] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41811 connecting to ZooKeeper ensemble=127.0.0.1:49768 2023-07-18 12:15:13,327 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:418110x0, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:13,329 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41811-0x101785b908e0000 connected 2023-07-18 12:15:13,353 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:13,354 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:13,354 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:13,355 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41811 2023-07-18 12:15:13,355 DEBUG [Listener at localhost/41565] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41811 2023-07-18 12:15:13,355 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41811 2023-07-18 12:15:13,358 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41811 2023-07-18 12:15:13,358 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41811 2023-07-18 12:15:13,360 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:13,360 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:13,360 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:13,360 INFO [Listener at localhost/41565] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 12:15:13,360 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:13,361 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:13,361 INFO [Listener at localhost/41565] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 12:15:13,361 INFO [Listener at localhost/41565] http.HttpServer(1146): Jetty bound to port 41147 2023-07-18 12:15:13,361 INFO [Listener at localhost/41565] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:13,362 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,363 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2b7f49a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:13,363 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,363 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@375d08da{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:13,477 INFO [Listener at localhost/41565] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:13,478 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:13,478 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:13,479 INFO [Listener at localhost/41565] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 12:15:13,479 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,480 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@569e02a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/jetty-0_0_0_0-41147-hbase-server-2_4_18-SNAPSHOT_jar-_-any-754027841399073209/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 12:15:13,481 INFO [Listener at localhost/41565] server.AbstractConnector(333): Started ServerConnector@177c20de{HTTP/1.1, (http/1.1)}{0.0.0.0:41147} 2023-07-18 12:15:13,481 INFO [Listener at localhost/41565] server.Server(415): Started @42212ms 2023-07-18 12:15:13,482 INFO [Listener at localhost/41565] master.HMaster(444): hbase.rootdir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31, hbase.cluster.distributed=false 2023-07-18 12:15:13,495 INFO [Listener at localhost/41565] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:13,495 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,495 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,495 INFO 
[Listener at localhost/41565] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:13,495 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,495 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:13,495 INFO [Listener at localhost/41565] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:13,496 INFO [Listener at localhost/41565] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44161 2023-07-18 12:15:13,496 INFO [Listener at localhost/41565] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:15:13,497 DEBUG [Listener at localhost/41565] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:15:13,498 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,499 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,501 INFO [Listener at localhost/41565] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44161 connecting to ZooKeeper ensemble=127.0.0.1:49768 2023-07-18 12:15:13,504 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:441610x0, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:13,505 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:441610x0, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:13,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44161-0x101785b908e0001 connected 2023-07-18 12:15:13,506 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:13,507 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:13,507 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44161 2023-07-18 12:15:13,507 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44161 2023-07-18 12:15:13,507 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44161 2023-07-18 12:15:13,508 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44161 2023-07-18 12:15:13,508 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44161 2023-07-18 12:15:13,509 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:13,509 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:13,510 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:13,510 INFO [Listener at localhost/41565] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:15:13,510 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:13,510 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:13,510 INFO [Listener at localhost/41565] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 12:15:13,511 INFO [Listener at localhost/41565] http.HttpServer(1146): Jetty bound to port 37959 2023-07-18 12:15:13,511 INFO [Listener at localhost/41565] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:13,512 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,512 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@11fafdd0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:13,512 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,513 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@76b3ed90{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:13,625 INFO [Listener at localhost/41565] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:13,626 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:13,626 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:13,627 INFO [Listener at localhost/41565] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 12:15:13,627 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,628 INFO 
[Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@267d594e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/jetty-0_0_0_0-37959-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2375229784263332637/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:13,630 INFO [Listener at localhost/41565] server.AbstractConnector(333): Started ServerConnector@7baa41fc{HTTP/1.1, (http/1.1)}{0.0.0.0:37959} 2023-07-18 12:15:13,630 INFO [Listener at localhost/41565] server.Server(415): Started @42360ms 2023-07-18 12:15:13,642 INFO [Listener at localhost/41565] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:13,642 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,642 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,642 INFO [Listener at localhost/41565] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:13,642 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,642 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:13,642 INFO [Listener at localhost/41565] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:13,643 INFO [Listener at localhost/41565] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44239 2023-07-18 12:15:13,643 INFO [Listener at localhost/41565] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:15:13,644 DEBUG [Listener at localhost/41565] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:15:13,645 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,646 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,647 INFO [Listener at localhost/41565] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44239 connecting to ZooKeeper ensemble=127.0.0.1:49768 2023-07-18 12:15:13,650 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:442390x0, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
12:15:13,651 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:442390x0, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:13,651 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44239-0x101785b908e0002 connected 2023-07-18 12:15:13,652 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:13,652 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:13,652 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44239 2023-07-18 12:15:13,653 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44239 2023-07-18 12:15:13,653 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44239 2023-07-18 12:15:13,653 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44239 2023-07-18 12:15:13,653 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44239 2023-07-18 12:15:13,655 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:13,655 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:13,655 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:13,656 INFO [Listener at localhost/41565] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:15:13,656 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:13,656 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:13,656 INFO [Listener at localhost/41565] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 12:15:13,657 INFO [Listener at localhost/41565] http.HttpServer(1146): Jetty bound to port 35613 2023-07-18 12:15:13,657 INFO [Listener at localhost/41565] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:13,661 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,662 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@16abae5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:13,662 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,662 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2bfbb1c9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:13,776 INFO [Listener at localhost/41565] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:13,777 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:13,777 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:13,777 INFO [Listener at localhost/41565] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 12:15:13,778 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,778 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@546e1439{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/jetty-0_0_0_0-35613-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5003296168271503453/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:13,781 INFO [Listener at localhost/41565] server.AbstractConnector(333): Started ServerConnector@527da838{HTTP/1.1, (http/1.1)}{0.0.0.0:35613} 2023-07-18 12:15:13,781 INFO [Listener at localhost/41565] server.Server(415): Started @42511ms 2023-07-18 12:15:13,792 INFO [Listener at localhost/41565] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:13,792 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,792 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,792 INFO [Listener at localhost/41565] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:13,792 INFO 
[Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:13,793 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:13,793 INFO [Listener at localhost/41565] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:13,793 INFO [Listener at localhost/41565] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36857 2023-07-18 12:15:13,794 INFO [Listener at localhost/41565] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:15:13,795 DEBUG [Listener at localhost/41565] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:15:13,795 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,796 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,797 INFO [Listener at localhost/41565] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36857 connecting to ZooKeeper ensemble=127.0.0.1:49768 2023-07-18 12:15:13,800 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:368570x0, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:13,802 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:368570x0, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:13,802 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36857-0x101785b908e0003 connected 2023-07-18 12:15:13,802 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:13,802 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:13,803 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36857 2023-07-18 12:15:13,803 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36857 2023-07-18 12:15:13,803 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36857 2023-07-18 12:15:13,803 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36857 2023-07-18 12:15:13,804 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=36857 2023-07-18 12:15:13,805 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:13,805 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:13,806 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:13,806 INFO [Listener at localhost/41565] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:15:13,806 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:13,806 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:13,806 INFO [Listener at localhost/41565] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 12:15:13,807 INFO [Listener at localhost/41565] http.HttpServer(1146): Jetty bound to port 41307 2023-07-18 12:15:13,807 INFO [Listener at localhost/41565] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:13,808 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,808 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ee861ba{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:13,808 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,808 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6d26fc67{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:13,935 INFO [Listener at localhost/41565] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:13,935 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:13,935 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:13,936 INFO [Listener at localhost/41565] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 12:15:13,936 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:13,937 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@37db8fa8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/jetty-0_0_0_0-41307-hbase-server-2_4_18-SNAPSHOT_jar-_-any-571142188926796576/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:13,938 INFO [Listener at localhost/41565] server.AbstractConnector(333): Started ServerConnector@24f59cfd{HTTP/1.1, (http/1.1)}{0.0.0.0:41307} 2023-07-18 12:15:13,939 INFO [Listener at localhost/41565] server.Server(415): Started @42669ms 2023-07-18 12:15:13,940 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:13,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5d79338a{HTTP/1.1, (http/1.1)}{0.0.0.0:41977} 2023-07-18 12:15:13,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42674ms 2023-07-18 12:15:13,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:13,946 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 12:15:13,946 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:13,947 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:13,947 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:13,947 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:13,947 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:13,948 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:13,949 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 12:15:13,951 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41811,1689682513314 from backup master directory 2023-07-18 12:15:13,951 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 12:15:13,952 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:13,952 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 12:15:13,952 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:15:13,952 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:13,969 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/hbase.id with ID: d8ce4127-c0fe-43d4-9f25-a4ffa4aa8f29 2023-07-18 12:15:13,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:13,984 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:13,998 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x22b6e6cc to 127.0.0.1:49768 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:14,007 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47631b85, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:14,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:14,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 12:15:14,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:14,010 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store-tmp 2023-07-18 12:15:14,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:14,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 12:15:14,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:14,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:14,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 12:15:14,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:14,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 12:15:14,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:15:14,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/WALs/jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:14,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41811%2C1689682513314, suffix=, logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/WALs/jenkins-hbase4.apache.org,41811,1689682513314, archiveDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/oldWALs, maxLogs=10 2023-07-18 12:15:14,043 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK] 2023-07-18 12:15:14,043 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK] 2023-07-18 12:15:14,043 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK] 2023-07-18 12:15:14,045 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/WALs/jenkins-hbase4.apache.org,41811,1689682513314/jenkins-hbase4.apache.org%2C41811%2C1689682513314.1689682514024 2023-07-18 12:15:14,045 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK], DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK], DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK]] 2023-07-18 12:15:14,045 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:14,046 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:14,046 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:14,046 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:14,048 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:14,050 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 12:15:14,050 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 12:15:14,050 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:14,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:14,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:14,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 12:15:14,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:14,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11057439360, jitterRate=0.0298042893409729}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:14,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:15:14,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 12:15:14,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 12:15:14,056 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 12:15:14,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 12:15:14,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 12:15:14,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 12:15:14,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 12:15:14,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 12:15:14,058 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 12:15:14,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 12:15:14,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 12:15:14,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 12:15:14,061 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:14,062 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 12:15:14,062 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 12:15:14,063 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 12:15:14,064 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:14,064 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:14,064 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 12:15:14,064 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:14,064 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:14,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41811,1689682513314, sessionid=0x101785b908e0000, setting cluster-up flag (Was=false) 2023-07-18 12:15:14,069 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:14,073 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 12:15:14,073 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:14,077 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:14,081 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 12:15:14,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:14,082 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.hbase-snapshot/.tmp 2023-07-18 12:15:14,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 12:15:14,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 12:15:14,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 12:15:14,085 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:15:14,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
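[editor's note] The ZKUtil entries above set watchers on znodes such as /hbase/balancer and /hbase/running before those nodes exist, and the ZKWatcher threads then receive NodeCreated / NodeChildrenChanged events once the master creates them. A minimal sketch of that pattern with the plain Apache ZooKeeper client follows; the quorum address 127.0.0.1:49768 is taken from the log, while the class name, the session timeout and the sleep are illustrative assumptions, not HBase's ZKUtil.

    import org.apache.zookeeper.*;
    import org.apache.zookeeper.data.Stat;

    // Illustrative sketch only: watch a znode that may not exist yet,
    // as the master does for /hbase/running.
    public class ZnodeWatchSketch implements Watcher {
        private ZooKeeper zk;

        public static void main(String[] args) throws Exception {
            ZnodeWatchSketch w = new ZnodeWatchSketch();
            // Quorum address copied from the log; 90s session timeout is an assumption.
            w.zk = new ZooKeeper("127.0.0.1:49768", 90_000, w);
            // exists() registers the watch even when the node is absent,
            // so a later create() fires a NodeCreated event.
            Stat stat = w.zk.exists("/hbase/running", true);
            System.out.println("/hbase/running currently " + (stat == null ? "absent" : "present"));
            Thread.sleep(60_000);  // keep the session alive while waiting for events
        }

        @Override
        public void process(WatchedEvent event) {
            // Mirrors the "Received ZooKeeper Event, type=NodeCreated" lines above.
            System.out.println("Event " + event.getType() + " on " + event.getPath());
        }
    }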
2023-07-18 12:15:14,086 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 12:15:14,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 12:15:14,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 12:15:14,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 12:15:14,097 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 12:15:14,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:15:14,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:15:14,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:15:14,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 12:15:14,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 12:15:14,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:14,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689682544098 2023-07-18 12:15:14,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 12:15:14,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 12:15:14,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 12:15:14,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 12:15:14,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 12:15:14,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 12:15:14,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,099 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 12:15:14,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 12:15:14,100 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 12:15:14,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 12:15:14,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 12:15:14,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 12:15:14,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 12:15:14,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682514100,5,FailOnTimeoutGroup] 2023-07-18 12:15:14,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682514100,5,FailOnTimeoutGroup] 2023-07-18 12:15:14,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
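[editor's note] The ChoreService lines above schedule recurring maintenance work (LogsCleaner and HFileCleaner every 600000 ms, the CompletedProcedureCleaner on a 30000 ms timeout). Functionally that is periodic scheduling; a rough analogue with a plain ScheduledExecutorService is sketched below, reusing the 600000 ms period from the log. It is an analogy only, not the ChoreService implementation.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Rough analogue of a ScheduledChore: run a cleaner task on a fixed period.
    public class ChoreSketch {
        public static void main(String[] args) {
            ScheduledExecutorService chorePool = Executors.newScheduledThreadPool(1);
            long periodMs = 600_000L;  // LogsCleaner / HFileCleaner period from the log
            chorePool.scheduleAtFixedRate(
                () -> System.out.println("cleaner chore tick"),  // stand-in for the cleaner work
                periodMs, periodMs, TimeUnit.MILLISECONDS);
        }
    }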
2023-07-18 12:15:14,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,101 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:14,110 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:14,111 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:14,111 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31 2023-07-18 12:15:14,118 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:14,119 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 12:15:14,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/info 2023-07-18 12:15:14,121 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 12:15:14,121 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:14,122 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 12:15:14,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:15:14,123 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 12:15:14,125 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:14,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 12:15:14,129 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/table 2023-07-18 
12:15:14,130 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 12:15:14,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:14,131 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740 2023-07-18 12:15:14,131 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740 2023-07-18 12:15:14,133 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 12:15:14,134 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 12:15:14,135 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:14,136 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9686728320, jitterRate=-0.09785312414169312}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 12:15:14,136 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 12:15:14,136 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 12:15:14,136 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 12:15:14,136 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 12:15:14,136 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 12:15:14,136 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 12:15:14,136 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 12:15:14,136 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 12:15:14,137 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 12:15:14,137 INFO [PEWorker-1] 
procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 12:15:14,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 12:15:14,138 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 12:15:14,139 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 12:15:14,141 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(951): ClusterId : d8ce4127-c0fe-43d4-9f25-a4ffa4aa8f29 2023-07-18 12:15:14,142 DEBUG [RS:0;jenkins-hbase4:44161] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:15:14,142 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(951): ClusterId : d8ce4127-c0fe-43d4-9f25-a4ffa4aa8f29 2023-07-18 12:15:14,142 INFO [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(951): ClusterId : d8ce4127-c0fe-43d4-9f25-a4ffa4aa8f29 2023-07-18 12:15:14,143 DEBUG [RS:1;jenkins-hbase4:44239] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:15:14,144 DEBUG [RS:2;jenkins-hbase4:36857] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:15:14,146 DEBUG [RS:0;jenkins-hbase4:44161] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:15:14,146 DEBUG [RS:1;jenkins-hbase4:44239] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:15:14,146 DEBUG [RS:1;jenkins-hbase4:44239] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:15:14,146 DEBUG [RS:0;jenkins-hbase4:44161] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:15:14,146 DEBUG [RS:2;jenkins-hbase4:36857] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:15:14,146 DEBUG [RS:2;jenkins-hbase4:36857] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:15:14,150 DEBUG [RS:0;jenkins-hbase4:44161] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:15:14,150 DEBUG [RS:1;jenkins-hbase4:44239] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:15:14,151 DEBUG [RS:0;jenkins-hbase4:44161] zookeeper.ReadOnlyZKClient(139): Connect 0x5b82237f to 127.0.0.1:49768 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:14,151 DEBUG [RS:2;jenkins-hbase4:36857] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:15:14,153 DEBUG [RS:1;jenkins-hbase4:44239] zookeeper.ReadOnlyZKClient(139): Connect 0x1992edaa to 127.0.0.1:49768 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:14,154 DEBUG [RS:2;jenkins-hbase4:36857] zookeeper.ReadOnlyZKClient(139): 
Connect 0x6d528288 to 127.0.0.1:49768 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:14,163 DEBUG [RS:1;jenkins-hbase4:44239] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c9060fc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:14,163 DEBUG [RS:2;jenkins-hbase4:36857] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28500146, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:14,163 DEBUG [RS:0;jenkins-hbase4:44161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5545c67f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:14,163 DEBUG [RS:2;jenkins-hbase4:36857] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1805c89d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:14,163 DEBUG [RS:1;jenkins-hbase4:44239] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b0b61e2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:14,163 DEBUG [RS:0;jenkins-hbase4:44161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@517a480f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:14,172 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44161 2023-07-18 12:15:14,172 INFO [RS:0;jenkins-hbase4:44161] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:15:14,172 INFO [RS:0;jenkins-hbase4:44161] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:15:14,172 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1022): About to register with Master. 
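[editor's note] A few entries back, FSTableDescriptors wrote the hbase:meta descriptor with its info, rep_barrier and table families (BLOOMFILTER 'NONE', IN_MEMORY 'true', VERSIONS '3', BLOCKSIZE '8192', plus the MultiRowMutationEndpoint coprocessor). For reference, the sketch below builds a family with those attributes through the public HBase 2.x descriptor builders; it illustrates the builder API under an assumed table name, it is not the code path InitMetaProcedure actually runs.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative: build a family matching the 'info' attributes printed above.
    public class MetaDescriptorSketch {
        public static void main(String[] args) throws Exception {
            TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo"))          // hypothetical table name
                .setColumnFamily(ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("info"))
                    .setBloomFilterType(BloomType.NONE)          // BLOOMFILTER => 'NONE'
                    .setInMemory(true)                           // IN_MEMORY => 'true'
                    .setMaxVersions(3)                           // VERSIONS => '3'
                    .setBlocksize(8192)                          // BLOCKSIZE => '8192'
                    .build())
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                .build();
            System.out.println(td);
        }
    }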
2023-07-18 12:15:14,173 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41811,1689682513314 with isa=jenkins-hbase4.apache.org/172.31.14.131:44161, startcode=1689682513494 2023-07-18 12:15:14,173 DEBUG [RS:0;jenkins-hbase4:44161] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:15:14,174 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34349, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:15:14,176 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41811] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,176 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:15:14,177 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 12:15:14,177 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:44239 2023-07-18 12:15:14,177 INFO [RS:1;jenkins-hbase4:44239] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:15:14,177 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31 2023-07-18 12:15:14,177 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33969 2023-07-18 12:15:14,177 INFO [RS:1;jenkins-hbase4:44239] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:15:14,177 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41147 2023-07-18 12:15:14,177 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 12:15:14,178 DEBUG [RS:2;jenkins-hbase4:36857] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36857 2023-07-18 12:15:14,178 INFO [RS:2;jenkins-hbase4:36857] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:15:14,178 INFO [RS:2;jenkins-hbase4:36857] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:15:14,178 DEBUG [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1022): About to register with Master. 
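[editor's note] The AbstractRpcClient lines above report the per-connection settings the region servers use when calling the master (tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000). The snippet below shows how those same options map onto a plain java.net.Socket; the target address reuses the master port from the log purely as an example, and this is only an illustration of the option names, not HBase's netty-based RPC client.

    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Illustrative mapping of the logged RPC connection settings onto java.net.Socket.
    public class RpcSocketOptionsSketch {
        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket()) {
                s.setTcpNoDelay(true);    // tcpNoDelay=true
                s.setKeepAlive(true);     // tcpKeepAlive=true
                s.setSoTimeout(20_000);   // readTO=20000 ms
                // connectTO=10000 ms; the address is just an example target.
                s.connect(new InetSocketAddress("localhost", 41811), 10_000);
            }
        }
    }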
2023-07-18 12:15:14,179 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:14,179 INFO [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41811,1689682513314 with isa=jenkins-hbase4.apache.org/172.31.14.131:36857, startcode=1689682513792 2023-07-18 12:15:14,179 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41811,1689682513314 with isa=jenkins-hbase4.apache.org/172.31.14.131:44239, startcode=1689682513641 2023-07-18 12:15:14,179 DEBUG [RS:2;jenkins-hbase4:36857] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:15:14,179 DEBUG [RS:0;jenkins-hbase4:44161] zookeeper.ZKUtil(162): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,179 DEBUG [RS:1;jenkins-hbase4:44239] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:15:14,180 WARN [RS:0;jenkins-hbase4:44161] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:15:14,180 INFO [RS:0;jenkins-hbase4:44161] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:14,180 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,180 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44161,1689682513494] 2023-07-18 12:15:14,181 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36599, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:15:14,181 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46445, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:15:14,181 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41811] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:14,182 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
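[editor's note] Region servers identify themselves to the master as host,port,startcode triples (for example jenkins-hbase4.apache.org,44161,1689682513494 above), and the same string form appears in the /hbase/rs znodes. A small sketch using the public ServerName helper to parse that form follows; the only input it assumes is the string taken from the log.

    import org.apache.hadoop.hbase.ServerName;

    // Illustrative: parse the host,port,startcode form logged during registration.
    public class ServerNameSketch {
        public static void main(String[] args) {
            ServerName sn = ServerName.valueOf("jenkins-hbase4.apache.org,44161,1689682513494");
            System.out.println(sn.getHostname());  // jenkins-hbase4.apache.org
            System.out.println(sn.getPort());      // 44161
            // the trailing component is the server's start timestamp (startcode)
        }
    }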
2023-07-18 12:15:14,182 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 12:15:14,182 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41811] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:14,182 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 12:15:14,183 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 12:15:14,183 DEBUG [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31 2023-07-18 12:15:14,183 DEBUG [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33969 2023-07-18 12:15:14,183 DEBUG [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41147 2023-07-18 12:15:14,185 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31 2023-07-18 12:15:14,185 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33969 2023-07-18 12:15:14,185 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41147 2023-07-18 12:15:14,189 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:14,190 DEBUG [RS:2;jenkins-hbase4:36857] zookeeper.ZKUtil(162): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:14,190 DEBUG [RS:1;jenkins-hbase4:44239] zookeeper.ZKUtil(162): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:14,190 WARN [RS:2;jenkins-hbase4:36857] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 12:15:14,190 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36857,1689682513792] 2023-07-18 12:15:14,191 INFO [RS:2;jenkins-hbase4:36857] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:14,190 WARN [RS:1;jenkins-hbase4:44239] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
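[editor's note] The RegionServerTracker messages above fire when each region server creates its ephemeral znode under /hbase/rs; because the node is ephemeral it disappears automatically if the server's ZooKeeper session dies, which is how the master notices a crash (hence the HBASE_ZNODE_FILE warning about clearing stale znodes for faster MTTR). A bare-bones sketch of creating such a node with the plain ZooKeeper client follows; the znode path reuses a server name from the log, everything else is illustrative.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative: an ephemeral registration znode like /hbase/rs/<host,port,startcode>.
    public class EphemeralRegistrationSketch {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("127.0.0.1:49768", 90_000, event -> { });
            zk.create("/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494",
                      new byte[0],                       // payload not needed for the example
                      ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.EPHEMERAL);             // removed automatically when the session ends
            // In the real cluster /hbase and /hbase/rs already exist; this sketch assumes they do too.
        }
    }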
2023-07-18 12:15:14,191 DEBUG [RS:0;jenkins-hbase4:44161] zookeeper.ZKUtil(162): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:14,191 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44239,1689682513641] 2023-07-18 12:15:14,191 INFO [RS:1;jenkins-hbase4:44239] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:14,191 DEBUG [RS:0;jenkins-hbase4:44161] zookeeper.ZKUtil(162): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,191 DEBUG [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:14,191 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:14,191 DEBUG [RS:0;jenkins-hbase4:44161] zookeeper.ZKUtil(162): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:14,193 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:15:14,194 INFO [RS:0;jenkins-hbase4:44161] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:15:14,196 INFO [RS:0;jenkins-hbase4:44161] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:15:14,197 INFO [RS:0;jenkins-hbase4:44161] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:15:14,197 DEBUG [RS:2;jenkins-hbase4:36857] zookeeper.ZKUtil(162): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:14,197 DEBUG [RS:1;jenkins-hbase4:44239] zookeeper.ZKUtil(162): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:14,197 INFO [RS:0;jenkins-hbase4:44161] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
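[editor's note] The MemStoreFlusher numbers above follow from two configuration fractions: the global memstore limit is a fraction of the heap (hbase.regionserver.global.memstore.size, default 0.4) and the low-water mark is a fraction of that limit (hbase.regionserver.global.memstore.size.lower.limit, default 0.95), which is why 743.3 M is 95% of 782.4 M. The arithmetic is sketched below; the heap size is back-computed from the logged limit and is an assumption.

    // Illustrative arithmetic behind globalMemStoreLimit / globalMemStoreLimitLowMark.
    public class MemStoreLimitSketch {
        public static void main(String[] args) {
            double heapBytes = 782.4 * 1024 * 1024 / 0.4;  // assumed heap, back-computed from the log
            double globalLimit = heapBytes * 0.4;          // hbase.regionserver.global.memstore.size
            double lowMark = globalLimit * 0.95;           // ...global.memstore.size.lower.limit
            System.out.printf("limit=%.1f M, lowMark=%.1f M%n",
                    globalLimit / (1024 * 1024), lowMark / (1024 * 1024));
            // prints limit=782.4 M, lowMark=743.3 M, matching the MemStoreFlusher line above
        }
    }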
2023-07-18 12:15:14,197 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:15:14,197 DEBUG [RS:2;jenkins-hbase4:36857] zookeeper.ZKUtil(162): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,198 DEBUG [RS:1;jenkins-hbase4:44239] zookeeper.ZKUtil(162): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,198 DEBUG [RS:2;jenkins-hbase4:36857] zookeeper.ZKUtil(162): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:14,198 INFO [RS:0;jenkins-hbase4:44161] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,198 DEBUG [RS:1;jenkins-hbase4:44239] zookeeper.ZKUtil(162): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:14,199 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,199 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,199 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,199 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,199 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,199 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:14,199 DEBUG [RS:2;jenkins-hbase4:36857] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:15:14,199 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,200 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,200 INFO [RS:2;jenkins-hbase4:36857] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:15:14,200 DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,200 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:15:14,200 
DEBUG [RS:0;jenkins-hbase4:44161] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,201 INFO [RS:1;jenkins-hbase4:44239] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:15:14,202 INFO [RS:2;jenkins-hbase4:36857] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:15:14,202 INFO [RS:0;jenkins-hbase4:44161] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,203 INFO [RS:0;jenkins-hbase4:44161] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,203 INFO [RS:0;jenkins-hbase4:44161] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,203 INFO [RS:1;jenkins-hbase4:44239] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:15:14,207 INFO [RS:2;jenkins-hbase4:36857] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:15:14,207 INFO [RS:1;jenkins-hbase4:44239] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:15:14,207 INFO [RS:2;jenkins-hbase4:36857] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,207 INFO [RS:1;jenkins-hbase4:44239] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,207 INFO [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:15:14,210 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:15:14,211 INFO [RS:1;jenkins-hbase4:44239] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,211 INFO [RS:2;jenkins-hbase4:36857] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
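[editor's note] Each region server above reports compaction throughput bounds of 100 MB/s (higher) and 50 MB/s (lower) with a 60000 ms tuning period. Those bounds are plain byte-per-second settings in the HBase configuration; the key names in the sketch below are my best reading of what PressureAwareCompactionThroughputController consumes, so treat them as an assumption to verify against the HBase version in use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Illustrative only: key names are assumed, check them against your HBase release.
    public class CompactionThroughputSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024); // 100 MB/s
            conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);   // 50 MB/s
            System.out.println(conf.get("hbase.hstore.compaction.throughput.higher.bound"));
        }
    }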
2023-07-18 12:15:14,212 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:2;jenkins-hbase4:36857] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,213 DEBUG [RS:1;jenkins-hbase4:44239] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:14,219 INFO [RS:0;jenkins-hbase4:44161] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:15:14,219 INFO [RS:0;jenkins-hbase4:44161] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44161,1689682513494-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,220 INFO [RS:2;jenkins-hbase4:36857] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,220 INFO [RS:2;jenkins-hbase4:36857] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,220 INFO [RS:2;jenkins-hbase4:36857] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,223 INFO [RS:1;jenkins-hbase4:44239] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,223 INFO [RS:1;jenkins-hbase4:44239] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,223 INFO [RS:1;jenkins-hbase4:44239] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
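[editor's note] The ExecutorService lines above start one named, fixed-size pool per kind of work (RS_OPEN_REGION with core=max=1, RS_LOG_REPLAY_OPS with core=max=2, and so on). The sketch below builds an equivalent fixed-size pool with java.util.concurrent directly; the thread name and task are placeholders, and this is an analogue of the pattern rather than HBase's own ExecutorService class.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Analogue of a named fixed-size executor such as RS_LOG_REPLAY_OPS (core=max=2).
    public class FixedPoolSketch {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2, 2,                          // corePoolSize = maxPoolSize = 2
                    60, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(),   // unbounded work queue
                    r -> new Thread(r, "RS_LOG_REPLAY_OPS-sketch"));
            pool.execute(() -> System.out.println("replay task placeholder"));
            pool.shutdown();
        }
    }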
2023-07-18 12:15:14,231 INFO [RS:0;jenkins-hbase4:44161] regionserver.Replication(203): jenkins-hbase4.apache.org,44161,1689682513494 started 2023-07-18 12:15:14,231 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44161,1689682513494, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44161, sessionid=0x101785b908e0001 2023-07-18 12:15:14,232 DEBUG [RS:0;jenkins-hbase4:44161] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:15:14,232 DEBUG [RS:0;jenkins-hbase4:44161] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,232 DEBUG [RS:0;jenkins-hbase4:44161] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44161,1689682513494' 2023-07-18 12:15:14,232 DEBUG [RS:0;jenkins-hbase4:44161] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:15:14,232 DEBUG [RS:0;jenkins-hbase4:44161] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:15:14,232 DEBUG [RS:0;jenkins-hbase4:44161] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:15:14,232 DEBUG [RS:0;jenkins-hbase4:44161] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:15:14,232 DEBUG [RS:0;jenkins-hbase4:44161] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,233 DEBUG [RS:0;jenkins-hbase4:44161] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44161,1689682513494' 2023-07-18 12:15:14,233 DEBUG [RS:0;jenkins-hbase4:44161] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:15:14,233 DEBUG [RS:0;jenkins-hbase4:44161] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:15:14,233 INFO [RS:2;jenkins-hbase4:36857] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:15:14,236 INFO [RS:2;jenkins-hbase4:36857] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36857,1689682513792-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,238 DEBUG [RS:0;jenkins-hbase4:44161] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:15:14,238 INFO [RS:0;jenkins-hbase4:44161] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 12:15:14,238 INFO [RS:0;jenkins-hbase4:44161] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 12:15:14,241 INFO [RS:1;jenkins-hbase4:44239] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:15:14,241 INFO [RS:1;jenkins-hbase4:44239] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44239,1689682513641-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:14,249 INFO [RS:2;jenkins-hbase4:36857] regionserver.Replication(203): jenkins-hbase4.apache.org,36857,1689682513792 started 2023-07-18 12:15:14,250 INFO [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36857,1689682513792, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36857, sessionid=0x101785b908e0003 2023-07-18 12:15:14,250 DEBUG [RS:2;jenkins-hbase4:36857] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:15:14,250 DEBUG [RS:2;jenkins-hbase4:36857] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:14,250 DEBUG [RS:2;jenkins-hbase4:36857] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36857,1689682513792' 2023-07-18 12:15:14,250 DEBUG [RS:2;jenkins-hbase4:36857] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:15:14,250 DEBUG [RS:2;jenkins-hbase4:36857] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:15:14,251 DEBUG [RS:2;jenkins-hbase4:36857] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:15:14,251 DEBUG [RS:2;jenkins-hbase4:36857] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:15:14,251 DEBUG [RS:2;jenkins-hbase4:36857] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:14,251 DEBUG [RS:2;jenkins-hbase4:36857] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36857,1689682513792' 2023-07-18 12:15:14,251 DEBUG [RS:2;jenkins-hbase4:36857] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:15:14,251 DEBUG [RS:2;jenkins-hbase4:36857] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:15:14,251 DEBUG [RS:2;jenkins-hbase4:36857] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:15:14,251 INFO [RS:2;jenkins-hbase4:36857] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 12:15:14,251 INFO [RS:2;jenkins-hbase4:36857] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 12:15:14,257 INFO [RS:1;jenkins-hbase4:44239] regionserver.Replication(203): jenkins-hbase4.apache.org,44239,1689682513641 started 2023-07-18 12:15:14,257 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44239,1689682513641, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44239, sessionid=0x101785b908e0002 2023-07-18 12:15:14,257 DEBUG [RS:1;jenkins-hbase4:44239] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:15:14,258 DEBUG [RS:1;jenkins-hbase4:44239] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:14,258 DEBUG [RS:1;jenkins-hbase4:44239] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44239,1689682513641' 2023-07-18 12:15:14,258 DEBUG [RS:1;jenkins-hbase4:44239] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:15:14,258 DEBUG [RS:1;jenkins-hbase4:44239] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:15:14,259 DEBUG [RS:1;jenkins-hbase4:44239] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:15:14,259 DEBUG [RS:1;jenkins-hbase4:44239] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:15:14,259 DEBUG [RS:1;jenkins-hbase4:44239] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:14,259 DEBUG [RS:1;jenkins-hbase4:44239] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44239,1689682513641' 2023-07-18 12:15:14,259 DEBUG [RS:1;jenkins-hbase4:44239] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:15:14,259 DEBUG [RS:1;jenkins-hbase4:44239] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:15:14,259 DEBUG [RS:1;jenkins-hbase4:44239] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:15:14,259 INFO [RS:1;jenkins-hbase4:44239] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 12:15:14,259 INFO [RS:1;jenkins-hbase4:44239] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
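At this point all three region servers have registered their ZKProcedureMemberRpcs members under /hbase/flush-table-proc/acquired and /hbase/online-snapshot/acquired. Those members perform the per-server work when a client requests a table flush or an online snapshot; a minimal client-side sketch (the table and snapshot names are hypothetical) might be:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ProcedureMemberSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("some_table");  // hypothetical table
          admin.flush(table);                        // exercises the flush-table-proc members started above
          admin.snapshot("some_table_snap", table);  // exercises the online-snapshot members
        }
      }
    }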
2023-07-18 12:15:14,290 DEBUG [jenkins-hbase4:41811] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 12:15:14,290 DEBUG [jenkins-hbase4:41811] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:14,290 DEBUG [jenkins-hbase4:41811] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:14,290 DEBUG [jenkins-hbase4:41811] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:14,290 DEBUG [jenkins-hbase4:41811] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:14,290 DEBUG [jenkins-hbase4:41811] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:14,291 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44161,1689682513494, state=OPENING 2023-07-18 12:15:14,293 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 12:15:14,294 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:14,295 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44161,1689682513494}] 2023-07-18 12:15:14,295 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:15:14,340 INFO [RS:0;jenkins-hbase4:44161] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44161%2C1689682513494, suffix=, logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44161,1689682513494, archiveDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs, maxLogs=32 2023-07-18 12:15:14,353 INFO [RS:2;jenkins-hbase4:36857] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36857%2C1689682513792, suffix=, logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,36857,1689682513792, archiveDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs, maxLogs=32 2023-07-18 12:15:14,359 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK] 2023-07-18 12:15:14,359 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK] 2023-07-18 12:15:14,359 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK] 2023-07-18 12:15:14,361 INFO [RS:1;jenkins-hbase4:44239] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44239%2C1689682513641, suffix=, logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44239,1689682513641, archiveDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs, maxLogs=32 2023-07-18 12:15:14,363 INFO [RS:0;jenkins-hbase4:44161] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44161,1689682513494/jenkins-hbase4.apache.org%2C44161%2C1689682513494.1689682514341 2023-07-18 12:15:14,365 DEBUG [RS:0;jenkins-hbase4:44161] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK], DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK], DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK]] 2023-07-18 12:15:14,381 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK] 2023-07-18 12:15:14,381 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK] 2023-07-18 12:15:14,381 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK] 2023-07-18 12:15:14,383 INFO [RS:2;jenkins-hbase4:36857] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,36857,1689682513792/jenkins-hbase4.apache.org%2C36857%2C1689682513792.1689682514353 2023-07-18 12:15:14,385 DEBUG [RS:2;jenkins-hbase4:36857] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK], DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK], DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK]] 2023-07-18 12:15:14,392 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK] 2023-07-18 12:15:14,392 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK] 2023-07-18 12:15:14,392 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK] 2023-07-18 12:15:14,393 WARN [ReadOnlyZKClient-127.0.0.1:49768@0x22b6e6cc] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 12:15:14,393 INFO [RS:1;jenkins-hbase4:44239] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44239,1689682513641/jenkins-hbase4.apache.org%2C44239%2C1689682513641.1689682514361 2023-07-18 12:15:14,394 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41811,1689682513314] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:15:14,394 DEBUG [RS:1;jenkins-hbase4:44239] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK], DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK], DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK]] 2023-07-18 12:15:14,399 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:15:14,399 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44161] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:35622 deadline: 1689682574399, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,449 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,451 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:15:14,452 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35636, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:15:14,457 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 12:15:14,457 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:14,458 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44161%2C1689682513494.meta, suffix=.meta, logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44161,1689682513494, archiveDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs, maxLogs=32 2023-07-18 12:15:14,473 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK] 2023-07-18 12:15:14,474 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK] 2023-07-18 12:15:14,474 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK] 2023-07-18 12:15:14,476 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44161,1689682513494/jenkins-hbase4.apache.org%2C44161%2C1689682513494.meta.1689682514459.meta 2023-07-18 12:15:14,476 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK], DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK], DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK]] 2023-07-18 12:15:14,476 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:14,477 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 12:15:14,477 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 12:15:14,477 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
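The "WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" values reported by AbstractFSWAL come from the region server configuration. A sketch of the settings that would produce them, assuming the stock configuration keys, could be:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // blocksize=256 MB: size of the HDFS block backing each WAL file
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        // rollsize=128 MB: the WAL is rolled at blocksize * this multiplier
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        // maxLogs=32: flushes are forced once this many un-archived WAL files accumulate
        conf.setInt("hbase.regionserver.maxlogs", 32);
        System.out.println(conf.getInt("hbase.regionserver.maxlogs", -1));
      }
    }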
2023-07-18 12:15:14,477 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 12:15:14,477 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:14,477 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 12:15:14,477 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 12:15:14,478 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 12:15:14,479 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/info 2023-07-18 12:15:14,479 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/info 2023-07-18 12:15:14,480 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 12:15:14,480 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:14,480 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 12:15:14,481 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:15:14,481 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/rep_barrier 2023-07-18 12:15:14,481 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 12:15:14,482 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:14,482 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 12:15:14,482 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/table 2023-07-18 12:15:14,482 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/table 2023-07-18 12:15:14,483 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 12:15:14,483 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:14,484 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740 2023-07-18 12:15:14,485 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740 2023-07-18 12:15:14,486 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 12:15:14,488 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 12:15:14,488 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11446799360, jitterRate=0.06606626510620117}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 12:15:14,488 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 12:15:14,489 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689682514449 2023-07-18 12:15:14,493 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 12:15:14,494 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 12:15:14,494 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44161,1689682513494, state=OPEN 2023-07-18 12:15:14,496 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 12:15:14,496 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 12:15:14,497 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 12:15:14,498 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44161,1689682513494 in 202 msec 2023-07-18 12:15:14,499 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 12:15:14,499 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 360 msec 2023-07-18 12:15:14,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 415 msec 2023-07-18 12:15:14,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689682514501, completionTime=-1 2023-07-18 12:15:14,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 12:15:14,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
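With pid=1 (InitMetaProcedure) finished and the hbase:meta location published under /hbase/meta-region-server as OPEN, client lookups that previously logged "Meta region is in state OPENING" can resolve the catalog region. A minimal sketch of such a lookup, assuming a client configured against this cluster, might be:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Same location the master just published in ZooKeeper for hbase:meta replicaId=0
          HRegionLocation meta = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          System.out.println(meta.getServerName()); // e.g. jenkins-hbase4.apache.org,44161,1689682513494
        }
      }
    }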
2023-07-18 12:15:14,506 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 12:15:14,506 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689682574506 2023-07-18 12:15:14,506 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689682634506 2023-07-18 12:15:14,506 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-18 12:15:14,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41811,1689682513314-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41811,1689682513314-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41811,1689682513314-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41811, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:14,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 12:15:14,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:14,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 12:15:14,525 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:14,525 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 12:15:14,526 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:14,527 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:14,528 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1 empty. 2023-07-18 12:15:14,528 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:14,528 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 12:15:14,559 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:14,561 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e9bcfb400da6f5dc4aa7b8dba733d5e1, NAME => 'hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp 2023-07-18 12:15:14,702 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41811,1689682513314] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-18 12:15:14,704 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41811,1689682513314] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 12:15:14,706 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:14,707 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:14,708 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:14,709 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384 empty. 2023-07-18 12:15:14,709 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:14,709 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 12:15:14,720 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:14,721 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6395d69a3eb7b192943f60c70e614384, NAME => 'hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp 2023-07-18 12:15:14,732 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:14,732 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 6395d69a3eb7b192943f60c70e614384, disabling compactions & flushes 2023-07-18 12:15:14,732 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:14,732 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 
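The descriptors printed for 'hbase:namespace' and 'hbase:rsgroup' above are created master-side by CreateTableProcedure (pid=4 and pid=5). For illustration only, equivalent descriptors could be built with the HBase 2.x client builder API roughly as follows; a client would normally hand such a descriptor to Admin.createTable.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TableDescriptorSketch {
      public static void main(String[] args) throws Exception {
        // Mirrors 'hbase:namespace': one 'info' family with VERSIONS=10, IN_MEMORY=true,
        // BLOOMFILTER=ROW, BLOCKSIZE=8192 (the remaining attributes are the defaults shown in the log).
        TableDescriptor namespace = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setBlocksize(8192)
                .build())
            .build();

        // Mirrors 'hbase:rsgroup': one 'm' family, the MultiRowMutationEndpoint coprocessor,
        // and the DisabledRegionSplitPolicy from its METADATA attributes.
        TableDescriptor rsgroup = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase", "rsgroup"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();

        System.out.println(namespace);
        System.out.println(rsgroup);
      }
    }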
2023-07-18 12:15:14,732 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. after waiting 0 ms 2023-07-18 12:15:14,732 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:14,732 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:14,732 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 6395d69a3eb7b192943f60c70e614384: 2023-07-18 12:15:14,734 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:14,735 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682514735"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682514735"}]},"ts":"1689682514735"} 2023-07-18 12:15:14,737 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 12:15:14,738 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:14,738 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682514738"}]},"ts":"1689682514738"} 2023-07-18 12:15:14,739 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 12:15:14,742 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:14,742 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:14,742 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:14,742 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:14,742 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:14,743 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6395d69a3eb7b192943f60c70e614384, ASSIGN}] 2023-07-18 12:15:14,743 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6395d69a3eb7b192943f60c70e614384, ASSIGN 2023-07-18 12:15:14,744 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=6395d69a3eb7b192943f60c70e614384, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44161,1689682513494; forceNewPlan=false, retain=false 2023-07-18 12:15:14,749 WARN 
[HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 12:15:14,894 INFO [jenkins-hbase4:41811] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 12:15:14,896 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=6395d69a3eb7b192943f60c70e614384, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:14,896 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682514896"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682514896"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682514896"}]},"ts":"1689682514896"} 2023-07-18 12:15:14,898 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE; OpenRegionProcedure 6395d69a3eb7b192943f60c70e614384, server=jenkins-hbase4.apache.org,44161,1689682513494}] 2023-07-18 12:15:14,980 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:14,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e9bcfb400da6f5dc4aa7b8dba733d5e1, disabling compactions & flushes 2023-07-18 12:15:14,981 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:14,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:14,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. after waiting 0 ms 2023-07-18 12:15:14,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:14,981 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:14,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e9bcfb400da6f5dc4aa7b8dba733d5e1: 2023-07-18 12:15:14,983 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:14,984 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682514984"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682514984"}]},"ts":"1689682514984"} 2023-07-18 12:15:14,985 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 12:15:14,986 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:14,986 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682514986"}]},"ts":"1689682514986"} 2023-07-18 12:15:14,987 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 12:15:14,992 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:14,992 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:14,992 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:14,992 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:14,992 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:14,993 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e9bcfb400da6f5dc4aa7b8dba733d5e1, ASSIGN}] 2023-07-18 12:15:14,993 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e9bcfb400da6f5dc4aa7b8dba733d5e1, ASSIGN 2023-07-18 12:15:14,994 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e9bcfb400da6f5dc4aa7b8dba733d5e1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44239,1689682513641; forceNewPlan=false, retain=false 2023-07-18 12:15:15,055 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:15,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6395d69a3eb7b192943f60c70e614384, NAME => 'hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:15,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 12:15:15,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. service=MultiRowMutationService 2023-07-18 12:15:15,055 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 12:15:15,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:15,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:15,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:15,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:15,057 INFO [StoreOpener-6395d69a3eb7b192943f60c70e614384-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:15,058 DEBUG [StoreOpener-6395d69a3eb7b192943f60c70e614384-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384/m 2023-07-18 12:15:15,058 DEBUG [StoreOpener-6395d69a3eb7b192943f60c70e614384-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384/m 2023-07-18 12:15:15,058 INFO [StoreOpener-6395d69a3eb7b192943f60c70e614384-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6395d69a3eb7b192943f60c70e614384 columnFamilyName m 2023-07-18 12:15:15,059 INFO [StoreOpener-6395d69a3eb7b192943f60c70e614384-1] regionserver.HStore(310): Store=6395d69a3eb7b192943f60c70e614384/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:15,059 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:15,060 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:15,062 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:15,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:15,064 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6395d69a3eb7b192943f60c70e614384; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@63255e7, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:15,065 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6395d69a3eb7b192943f60c70e614384: 2023-07-18 12:15:15,065 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384., pid=7, masterSystemTime=1689682515051 2023-07-18 12:15:15,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:15,068 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:15,068 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=6395d69a3eb7b192943f60c70e614384, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:15,068 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689682515068"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682515068"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682515068"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682515068"}]},"ts":"1689682515068"} 2023-07-18 12:15:15,071 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-18 12:15:15,071 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; OpenRegionProcedure 6395d69a3eb7b192943f60c70e614384, server=jenkins-hbase4.apache.org,44161,1689682513494 in 172 msec 2023-07-18 12:15:15,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-18 12:15:15,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=6395d69a3eb7b192943f60c70e614384, ASSIGN in 328 msec 2023-07-18 12:15:15,073 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:15,073 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682515073"}]},"ts":"1689682515073"} 2023-07-18 12:15:15,077 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated 
tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 12:15:15,080 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:15,081 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 378 msec 2023-07-18 12:15:15,108 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 12:15:15,108 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 12:15:15,113 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:15,113 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:15,116 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 12:15:15,118 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 12:15:15,144 INFO [jenkins-hbase4:41811] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
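Once the RSGroupStartupWorker reports "GroupBasedLoadBalancer is now online" and the /hbase/rsgroup/default znode has been written, group membership can be queried. The sketch below assumes the RSGroupAdminClient helper from the hbase-rsgroup module with a Connection-based constructor; it is illustrative only.

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // assumed constructor
          // The 'default' group backs the /hbase/rsgroup/default znode written above.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          System.out.println(defaultGroup.getServers()); // the three region servers registered earlier
        }
      }
    }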
2023-07-18 12:15:15,146 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e9bcfb400da6f5dc4aa7b8dba733d5e1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:15,146 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682515146"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682515146"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682515146"}]},"ts":"1689682515146"} 2023-07-18 12:15:15,148 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure e9bcfb400da6f5dc4aa7b8dba733d5e1, server=jenkins-hbase4.apache.org,44239,1689682513641}] 2023-07-18 12:15:15,301 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:15,301 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 12:15:15,302 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51548, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 12:15:15,306 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:15,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e9bcfb400da6f5dc4aa7b8dba733d5e1, NAME => 'hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:15,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:15,307 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:15,307 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:15,307 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:15,308 INFO [StoreOpener-e9bcfb400da6f5dc4aa7b8dba733d5e1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:15,309 DEBUG [StoreOpener-e9bcfb400da6f5dc4aa7b8dba733d5e1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1/info 2023-07-18 12:15:15,309 DEBUG [StoreOpener-e9bcfb400da6f5dc4aa7b8dba733d5e1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1/info 2023-07-18 12:15:15,309 INFO [StoreOpener-e9bcfb400da6f5dc4aa7b8dba733d5e1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e9bcfb400da6f5dc4aa7b8dba733d5e1 columnFamilyName info 2023-07-18 12:15:15,310 INFO [StoreOpener-e9bcfb400da6f5dc4aa7b8dba733d5e1-1] regionserver.HStore(310): Store=e9bcfb400da6f5dc4aa7b8dba733d5e1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:15,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:15,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:15,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:15,316 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:15,316 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e9bcfb400da6f5dc4aa7b8dba733d5e1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10955559040, jitterRate=0.020315945148468018}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:15,316 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e9bcfb400da6f5dc4aa7b8dba733d5e1: 2023-07-18 12:15:15,317 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1., pid=9, masterSystemTime=1689682515301 2023-07-18 12:15:15,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 
2023-07-18 12:15:15,321 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e9bcfb400da6f5dc4aa7b8dba733d5e1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:15,321 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689682515321"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682515321"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682515321"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682515321"}]},"ts":"1689682515321"} 2023-07-18 12:15:15,322 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:15,324 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-18 12:15:15,324 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure e9bcfb400da6f5dc4aa7b8dba733d5e1, server=jenkins-hbase4.apache.org,44239,1689682513641 in 175 msec 2023-07-18 12:15:15,326 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=4 2023-07-18 12:15:15,326 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e9bcfb400da6f5dc4aa7b8dba733d5e1, ASSIGN in 331 msec 2023-07-18 12:15:15,326 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:15,326 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682515326"}]},"ts":"1689682515326"} 2023-07-18 12:15:15,327 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 12:15:15,329 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:15,331 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 815 msec 2023-07-18 12:15:15,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 12:15:15,419 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:15,420 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:15,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=ClientService, sasl=false 2023-07-18 12:15:15,424 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51564, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:15:15,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 12:15:15,434 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:15,439 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-18 12:15:15,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 12:15:15,456 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:15,458 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-18 12:15:15,476 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 12:15:15,479 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 12:15:15,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.527sec 2023-07-18 12:15:15,480 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 12:15:15,480 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 12:15:15,480 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 12:15:15,480 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41811,1689682513314-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 12:15:15,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41811,1689682513314-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-18 12:15:15,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 12:15:15,542 DEBUG [Listener at localhost/41565] zookeeper.ReadOnlyZKClient(139): Connect 0x66e9ca67 to 127.0.0.1:49768 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:15,548 DEBUG [Listener at localhost/41565] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22cc8e8f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:15,550 DEBUG [hconnection-0x4981e3b1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:15:15,552 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35644, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:15:15,553 INFO [Listener at localhost/41565] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:15,554 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:15,557 DEBUG [Listener at localhost/41565] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 12:15:15,559 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34648, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 12:15:15,563 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 12:15:15,563 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:15,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 12:15:15,564 DEBUG [Listener at localhost/41565] zookeeper.ReadOnlyZKClient(139): Connect 0x34993deb to 127.0.0.1:49768 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:15,570 DEBUG [Listener at localhost/41565] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72c3dca2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:15,570 INFO [Listener at localhost/41565] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:49768 2023-07-18 12:15:15,573 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:15,575 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101785b908e000a connected 2023-07-18 
12:15:15,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:15,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:15,581 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 12:15:15,592 INFO [Listener at localhost/41565] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 12:15:15,592 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:15,592 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:15,592 INFO [Listener at localhost/41565] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 12:15:15,592 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 12:15:15,593 INFO [Listener at localhost/41565] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 12:15:15,593 INFO [Listener at localhost/41565] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 12:15:15,593 INFO [Listener at localhost/41565] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46447 2023-07-18 12:15:15,594 INFO [Listener at localhost/41565] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 12:15:15,595 DEBUG [Listener at localhost/41565] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 12:15:15,596 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:15,597 INFO [Listener at localhost/41565] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 12:15:15,597 INFO [Listener at localhost/41565] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46447 connecting to ZooKeeper ensemble=127.0.0.1:49768 2023-07-18 12:15:15,607 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:464470x0, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 12:15:15,610 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(162): regionserver:464470x0, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 12:15:15,611 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:46447-0x101785b908e000b connected 2023-07-18 12:15:15,612 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(162): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 12:15:15,612 DEBUG [Listener at localhost/41565] zookeeper.ZKUtil(164): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 12:15:15,613 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46447 2023-07-18 12:15:15,613 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46447 2023-07-18 12:15:15,614 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46447 2023-07-18 12:15:15,618 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46447 2023-07-18 12:15:15,620 DEBUG [Listener at localhost/41565] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46447 2023-07-18 12:15:15,622 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 12:15:15,622 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 12:15:15,622 INFO [Listener at localhost/41565] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 12:15:15,623 INFO [Listener at localhost/41565] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 12:15:15,623 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 12:15:15,623 INFO [Listener at localhost/41565] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 12:15:15,623 INFO [Listener at localhost/41565] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 12:15:15,623 INFO [Listener at localhost/41565] http.HttpServer(1146): Jetty bound to port 46245 2023-07-18 12:15:15,624 INFO [Listener at localhost/41565] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 12:15:15,627 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:15,627 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53881145{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,AVAILABLE} 2023-07-18 12:15:15,627 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:15,628 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@484f4c26{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 12:15:15,741 INFO [Listener at localhost/41565] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 12:15:15,741 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 12:15:15,742 INFO [Listener at localhost/41565] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 12:15:15,742 INFO [Listener at localhost/41565] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 12:15:15,743 INFO [Listener at localhost/41565] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 12:15:15,743 INFO [Listener at localhost/41565] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2761a6f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/java.io.tmpdir/jetty-0_0_0_0-46245-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6403887138812249957/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:15,745 INFO [Listener at localhost/41565] server.AbstractConnector(333): Started ServerConnector@43dc1f6f{HTTP/1.1, (http/1.1)}{0.0.0.0:46245} 2023-07-18 12:15:15,745 INFO [Listener at localhost/41565] server.Server(415): Started @44476ms 2023-07-18 12:15:15,748 INFO [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(951): ClusterId : d8ce4127-c0fe-43d4-9f25-a4ffa4aa8f29 2023-07-18 12:15:15,749 DEBUG [RS:3;jenkins-hbase4:46447] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 12:15:15,751 DEBUG [RS:3;jenkins-hbase4:46447] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 12:15:15,751 DEBUG [RS:3;jenkins-hbase4:46447] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 12:15:15,752 DEBUG [RS:3;jenkins-hbase4:46447] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 12:15:15,753 DEBUG [RS:3;jenkins-hbase4:46447] zookeeper.ReadOnlyZKClient(139): Connect 0x03359b97 to 
127.0.0.1:49768 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 12:15:15,757 DEBUG [RS:3;jenkins-hbase4:46447] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@df26581, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 12:15:15,757 DEBUG [RS:3;jenkins-hbase4:46447] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@9cdf0e7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:15,766 DEBUG [RS:3;jenkins-hbase4:46447] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:46447 2023-07-18 12:15:15,766 INFO [RS:3;jenkins-hbase4:46447] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 12:15:15,766 INFO [RS:3;jenkins-hbase4:46447] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 12:15:15,766 DEBUG [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 12:15:15,766 INFO [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41811,1689682513314 with isa=jenkins-hbase4.apache.org/172.31.14.131:46447, startcode=1689682515592 2023-07-18 12:15:15,767 DEBUG [RS:3;jenkins-hbase4:46447] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 12:15:15,769 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37345, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 12:15:15,769 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41811] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,769 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 12:15:15,770 DEBUG [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31 2023-07-18 12:15:15,770 DEBUG [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33969 2023-07-18 12:15:15,770 DEBUG [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41147 2023-07-18 12:15:15,779 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:15,779 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:15,779 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:15,779 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:15,779 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:15,780 DEBUG [RS:3;jenkins-hbase4:46447] zookeeper.ZKUtil(162): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,780 WARN [RS:3;jenkins-hbase4:46447] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 12:15:15,780 INFO [RS:3;jenkins-hbase4:46447] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 12:15:15,780 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 12:15:15,780 DEBUG [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,780 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46447,1689682515592] 2023-07-18 12:15:15,780 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:15,780 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:15,780 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:15,782 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 12:15:15,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:15,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:15,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:15,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:15,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:15,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:15,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,784 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,784 DEBUG [RS:3;jenkins-hbase4:46447] zookeeper.ZKUtil(162): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:15,785 DEBUG [RS:3;jenkins-hbase4:46447] zookeeper.ZKUtil(162): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:15,785 DEBUG [RS:3;jenkins-hbase4:46447] zookeeper.ZKUtil(162): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:15,785 DEBUG [RS:3;jenkins-hbase4:46447] zookeeper.ZKUtil(162): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,786 DEBUG [RS:3;jenkins-hbase4:46447] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 12:15:15,786 INFO [RS:3;jenkins-hbase4:46447] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 12:15:15,787 INFO [RS:3;jenkins-hbase4:46447] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 12:15:15,787 INFO [RS:3;jenkins-hbase4:46447] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 12:15:15,787 INFO [RS:3;jenkins-hbase4:46447] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:15,787 INFO [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 12:15:15,789 INFO [RS:3;jenkins-hbase4:46447] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,789 DEBUG [RS:3;jenkins-hbase4:46447] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 12:15:15,790 INFO [RS:3;jenkins-hbase4:46447] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:15,790 INFO [RS:3;jenkins-hbase4:46447] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:15,790 INFO [RS:3;jenkins-hbase4:46447] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 12:15:15,804 INFO [RS:3;jenkins-hbase4:46447] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 12:15:15,804 INFO [RS:3;jenkins-hbase4:46447] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46447,1689682515592-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 12:15:15,815 INFO [RS:3;jenkins-hbase4:46447] regionserver.Replication(203): jenkins-hbase4.apache.org,46447,1689682515592 started 2023-07-18 12:15:15,815 INFO [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46447,1689682515592, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46447, sessionid=0x101785b908e000b 2023-07-18 12:15:15,815 DEBUG [RS:3;jenkins-hbase4:46447] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 12:15:15,815 DEBUG [RS:3;jenkins-hbase4:46447] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,815 DEBUG [RS:3;jenkins-hbase4:46447] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46447,1689682515592' 2023-07-18 12:15:15,815 DEBUG [RS:3;jenkins-hbase4:46447] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 12:15:15,815 DEBUG [RS:3;jenkins-hbase4:46447] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 12:15:15,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:15,816 DEBUG [RS:3;jenkins-hbase4:46447] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 12:15:15,816 DEBUG [RS:3;jenkins-hbase4:46447] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 12:15:15,816 DEBUG [RS:3;jenkins-hbase4:46447] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:15,816 DEBUG [RS:3;jenkins-hbase4:46447] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46447,1689682515592' 2023-07-18 12:15:15,816 DEBUG [RS:3;jenkins-hbase4:46447] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 12:15:15,816 DEBUG [RS:3;jenkins-hbase4:46447] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 12:15:15,817 DEBUG [RS:3;jenkins-hbase4:46447] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 12:15:15,817 INFO [RS:3;jenkins-hbase4:46447] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 12:15:15,817 INFO [RS:3;jenkins-hbase4:46447] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 12:15:15,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:15,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:15,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:15,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:15,822 DEBUG [hconnection-0x1b54179-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 12:15:15,825 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35646, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 12:15:15,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:15,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:15,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:15,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:15,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34648 deadline: 1689683715833, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
2023-07-18 12:15:15,834 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 12:15:15,835 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:15,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:15,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:15,836 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:15,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:15,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:15,892 INFO [Listener at localhost/41565] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=565 (was 520) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 33969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: CacheReplicationMonitor(1737962395) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Server handler 1 on default port 33969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 1 on default port 35449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35449 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:42421 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1820719489_17 at /127.0.0.1:51502 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2032604803-2234 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1346711658@qtp-1866657093-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36075 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2032604803-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp310537486-2328 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3059fcdc-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp636034015-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x22b6e6cc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x03359b97-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-c6b1b47-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4f1e6b70-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1803455282@qtp-1866657093-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1175020809-2293 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x22b6e6cc-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/41565.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp636034015-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x66e9ca67-SendThread(127.0.0.1:49768) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1175020809-2300 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@3243ee22 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682514100 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34965-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@105e13de sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4981e3b1-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@15ec4dfc[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x1992edaa-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/41565-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1908553992_17 at /127.0.0.1:45654 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33969 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 1366183372@qtp-1735732416-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35329 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:65201@0x054e4bf8-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1018421467-172.31.14.131-1689682512614 heartbeating to localhost/127.0.0.1:33969 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1630228879@qtp-1691749253-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1405011853-2337 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1820719489_17 at /127.0.0.1:33384 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1820719489_17 at /127.0.0.1:45624 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 34137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1405011853-2338-acceptor-0@38befd68-ServerConnector@5d79338a{HTTP/1.1, (http/1.1)}{0.0.0.0:41977} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2032604803-2233-acceptor-0@16c25333-ServerConnector@177c20de{HTTP/1.1, (http/1.1)}{0.0.0.0:41147} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp636034015-2264-acceptor-0@6554bb59-ServerConnector@7baa41fc{HTTP/1.1, (http/1.1)}{0.0.0.0:37959} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46447Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44239Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x6d528288-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/41565-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 33969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33969 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4716cfda[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:65201@0x054e4bf8-SendThread(127.0.0.1:65201) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data6/current/BP-1018421467-172.31.14.131-1689682512614 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:36857-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: BP-1018421467-172.31.14.131-1689682512614 heartbeating to localhost/127.0.0.1:33969 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4f1e6b70-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5a6da7f9 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-a046d36-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682514100 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RS:3;jenkins-hbase4:46447-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1908553992_17 at /127.0.0.1:51488 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:42421 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x5b82237f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@106c4a8a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x5b82237f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2032604803-2235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44161Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:44161 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data3/current/BP-1018421467-172.31.14.131-1689682512614 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1908553992_17 at /127.0.0.1:33410 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:65201@0x054e4bf8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33969 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x5b82237f-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1210082457_17 at /127.0.0.1:51440 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x1992edaa sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-560-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x66e9ca67 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/41565.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Listener at localhost/41565-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1134124973-2604-acceptor-0@2f6d0c32-ServerConnector@43dc1f6f{HTTP/1.1, (http/1.1)}{0.0.0.0:46245} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:36857 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@69823956 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x22b6e6cc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 41565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1908553992_17 at /127.0.0.1:33378 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data1/current/BP-1018421467-172.31.14.131-1689682512614 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1953528402_17 at /127.0.0.1:33344 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/41565 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_1908553992_17 at /127.0.0.1:33448 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33969 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4f1e6b70-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data4/current/BP-1018421467-172.31.14.131-1689682512614 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp2032604803-2236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1134124973-2608 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:42421 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1134124973-2606 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp636034015-2268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@51b16ffe java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41811 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:44239-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-569-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1018421467-172.31.14.131-1689682512614 heartbeating to localhost/127.0.0.1:33969 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 632855631@qtp-1691749253-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35299 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: jenkins-hbase4:41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: qtp1405011853-2339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:41811 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) 
org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1405011853-2340 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1210082457_17 at /127.0.0.1:33394 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x1b54179-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp310537486-2327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:33969 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x1992edaa-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins@localhost:42421 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp310537486-2325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34965-SendThread(127.0.0.1:65201) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data5/current/BP-1018421467-172.31.14.131-1689682512614 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1134124973-2610 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:33969 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x34993deb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33969 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 199905714@qtp-2024729879-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 2 on default port 35449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 4 on default port 35449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server idle connection scanner for port 41565 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35371,1689682507989 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 41565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially 
hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:42421 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33969 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1175020809-2295 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1953528402_17 at /127.0.0.1:45592 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1405011853-2336 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1210082457_17 at /127.0.0.1:51516 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-596cac78-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 34137 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp310537486-2324-acceptor-0@71a5b64e-ServerConnector@24f59cfd{HTTP/1.1, (http/1.1)}{0.0.0.0:41307} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: jenkins-hbase4:36857Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1175020809-2294-acceptor-0@7f7316bc-ServerConnector@527da838{HTTP/1.1, (http/1.1)}{0.0.0.0:35613} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1134124973-2607 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1134124973-2605 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-564-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31-prefix:jenkins-hbase4.apache.org,36857,1689682513792 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@42043e9b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31-prefix:jenkins-hbase4.apache.org,44161,1689682513494 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1210082457_17 at /127.0.0.1:45638 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:3;jenkins-hbase4:46447 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4f1e6b70-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:44161-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41811,1689682513314 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 34137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data2/current/BP-1018421467-172.31.14.131-1689682512614 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:33969 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1175020809-2297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 41565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:49768 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:49768@0x03359b97 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@c52accd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 35449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2032604803-2232 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x4f1e6b70-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x66e9ca67-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp310537486-2326 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1175020809-2296 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1953528402_17 at /127.0.0.1:51464 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741829_1005] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x6d528288-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp636034015-2266 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@445cea1c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:44239 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33969 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:42421 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x03359b97-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1058617424@qtp-2024729879-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44361 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1405011853-2334 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3124a64 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1134124973-2609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-555-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1908553992_17 at /127.0.0.1:45620 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x1b54179-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2032604803-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-76eb1808-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1134124973-2603 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:42421 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 4 on default port 41565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:42421 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp310537486-2330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData-prefix:jenkins-hbase4.apache.org,41811,1689682513314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1405011853-2335 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x34993deb-SendThread(127.0.0.1:49768) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5097ffc5 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2c6d5bf9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp636034015-2263 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1908553992_17 at /127.0.0.1:51522 [Receiving block BP-1018421467-172.31.14.131-1689682512614:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp310537486-2329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x34993deb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 34137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31-prefix:jenkins-hbase4.apache.org,44161,1689682513494.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2032604803-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31-prefix:jenkins-hbase4.apache.org,44239,1689682513641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41565.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp310537486-2323 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp636034015-2265 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-565-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33969 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1018421467-172.31.14.131-1689682512614:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:42421 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 1970280119@qtp-1735732416-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp636034015-2267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 2 on default port 33969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x4f1e6b70-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:49768): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: IPC Server handler 4 on default port 34137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49768@0x6d528288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/944149523.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@50d24b38[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@22bbfe23 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1953528402_17 at /127.0.0.1:45548 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-550-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1175020809-2298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1405011853-2341 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4f1e6b70-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1175020809-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4f1e6b70-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36857 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46447 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=846 (was 822) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=432 (was 467), ProcessCount=174 (was 176), AvailableMemoryMB=4375 (was 2426) - AvailableMemoryMB LEAK? 
- 2023-07-18 12:15:15,895 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=565 is superior to 500 2023-07-18 12:15:15,912 INFO [Listener at localhost/41565] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=565, OpenFileDescriptor=846, MaxFileDescriptor=60000, SystemLoadAverage=432, ProcessCount=174, AvailableMemoryMB=4374 2023-07-18 12:15:15,912 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=565 is superior to 500 2023-07-18 12:15:15,913 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-18 12:15:15,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:15,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:15,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:15,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 12:15:15,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:15,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:15,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:15,919 INFO [RS:3;jenkins-hbase4:46447] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46447%2C1689682515592, suffix=, logDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,46447,1689682515592, archiveDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs, maxLogs=32 2023-07-18 12:15:15,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:15,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:15,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:15,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:15,928 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:15,929 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:15,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:15,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:15,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:15,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:15,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:15,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:15,942 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK] 2023-07-18 12:15:15,942 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK] 2023-07-18 12:15:15,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:15,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:15,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34648 deadline: 1689683715943, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 2023-07-18 12:15:15,943 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:15,948 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK] 2023-07-18 12:15:15,948 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:15,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:15,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:15,950 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:15,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:15,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:15,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:15,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 12:15:15,960 INFO [RS:3;jenkins-hbase4:46447] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,46447,1689682515592/jenkins-hbase4.apache.org%2C46447%2C1689682515592.1689682515919 2023-07-18 12:15:15,960 DEBUG [RS:3;jenkins-hbase4:46447] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43721,DS-e32eccea-d550-4116-9fac-59bf1f27b9bf,DISK], DatanodeInfoWithStorage[127.0.0.1:34253,DS-43a7494f-50c3-408a-ae88-bf5e50c8bb6e,DISK], DatanodeInfoWithStorage[127.0.0.1:41025,DS-f9e9eaf7-2cdf-423b-91ab-73caca0c1a6a,DISK]] 2023-07-18 12:15:15,960 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:15,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-18 12:15:15,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 12:15:15,962 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:15,963 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:15,963 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:15,967 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 12:15:15,968 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:15,968 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d empty. 
2023-07-18 12:15:15,969 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:15,969 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 12:15:15,993 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-18 12:15:15,994 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 06b19b4fcedcf2378d8c841992d20b6d, NAME => 't1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp 2023-07-18 12:15:16,015 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:16,015 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 06b19b4fcedcf2378d8c841992d20b6d, disabling compactions & flushes 2023-07-18 12:15:16,015 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,015 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,015 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. after waiting 0 ms 2023-07-18 12:15:16,015 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,015 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,015 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 06b19b4fcedcf2378d8c841992d20b6d: 2023-07-18 12:15:16,018 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 12:15:16,019 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682516019"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682516019"}]},"ts":"1689682516019"} 2023-07-18 12:15:16,020 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 12:15:16,024 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 12:15:16,025 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682516024"}]},"ts":"1689682516024"} 2023-07-18 12:15:16,026 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-18 12:15:16,030 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 12:15:16,030 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 12:15:16,030 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 12:15:16,030 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 12:15:16,030 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 12:15:16,030 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 12:15:16,030 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=06b19b4fcedcf2378d8c841992d20b6d, ASSIGN}] 2023-07-18 12:15:16,031 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=06b19b4fcedcf2378d8c841992d20b6d, ASSIGN 2023-07-18 12:15:16,032 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=06b19b4fcedcf2378d8c841992d20b6d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44161,1689682513494; forceNewPlan=false, retain=false 2023-07-18 12:15:16,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 12:15:16,182 INFO [jenkins-hbase4:41811] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 12:15:16,183 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=06b19b4fcedcf2378d8c841992d20b6d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:16,184 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682516183"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682516183"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682516183"}]},"ts":"1689682516183"} 2023-07-18 12:15:16,185 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 06b19b4fcedcf2378d8c841992d20b6d, server=jenkins-hbase4.apache.org,44161,1689682513494}] 2023-07-18 12:15:16,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 12:15:16,340 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 06b19b4fcedcf2378d8c841992d20b6d, NAME => 't1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 12:15:16,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 12:15:16,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,342 INFO [StoreOpener-06b19b4fcedcf2378d8c841992d20b6d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,346 DEBUG [StoreOpener-06b19b4fcedcf2378d8c841992d20b6d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d/cf1 2023-07-18 12:15:16,346 DEBUG [StoreOpener-06b19b4fcedcf2378d8c841992d20b6d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d/cf1 2023-07-18 12:15:16,346 INFO [StoreOpener-06b19b4fcedcf2378d8c841992d20b6d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 06b19b4fcedcf2378d8c841992d20b6d columnFamilyName cf1 2023-07-18 12:15:16,347 INFO [StoreOpener-06b19b4fcedcf2378d8c841992d20b6d-1] regionserver.HStore(310): Store=06b19b4fcedcf2378d8c841992d20b6d/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 12:15:16,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 12:15:16,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 06b19b4fcedcf2378d8c841992d20b6d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9600935360, jitterRate=-0.10584321618080139}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 12:15:16,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 06b19b4fcedcf2378d8c841992d20b6d: 2023-07-18 12:15:16,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d., pid=14, masterSystemTime=1689682516337 2023-07-18 12:15:16,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,358 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 
2023-07-18 12:15:16,358 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=06b19b4fcedcf2378d8c841992d20b6d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:16,359 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682516358"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689682516358"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689682516358"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689682516358"}]},"ts":"1689682516358"} 2023-07-18 12:15:16,361 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-18 12:15:16,361 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 06b19b4fcedcf2378d8c841992d20b6d, server=jenkins-hbase4.apache.org,44161,1689682513494 in 175 msec 2023-07-18 12:15:16,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 12:15:16,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=06b19b4fcedcf2378d8c841992d20b6d, ASSIGN in 331 msec 2023-07-18 12:15:16,363 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 12:15:16,363 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682516363"}]},"ts":"1689682516363"} 2023-07-18 12:15:16,364 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-18 12:15:16,366 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 12:15:16,367 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 414 msec 2023-07-18 12:15:16,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 12:15:16,565 INFO [Listener at localhost/41565] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-18 12:15:16,565 DEBUG [Listener at localhost/41565] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-18 12:15:16,565 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:16,567 INFO [Listener at localhost/41565] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-18 12:15:16,568 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:16,568 INFO [Listener at localhost/41565] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-18 12:15:16,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 12:15:16,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 12:15:16,572 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 12:15:16,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-18 12:15:16,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:34648 deadline: 1689682576569, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-18 12:15:16,575 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:16,581 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=10 msec 2023-07-18 12:15:16,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:16,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:16,678 INFO [Listener at localhost/41565] client.HBaseAdmin$15(890): Started disable of t1 2023-07-18 12:15:16,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-18 12:15:16,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-18 12:15:16,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 12:15:16,682 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682516682"}]},"ts":"1689682516682"} 2023-07-18 12:15:16,684 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-18 12:15:16,685 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-18 12:15:16,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=06b19b4fcedcf2378d8c841992d20b6d, UNASSIGN}] 2023-07-18 12:15:16,687 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=06b19b4fcedcf2378d8c841992d20b6d, UNASSIGN 2023-07-18 12:15:16,688 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=06b19b4fcedcf2378d8c841992d20b6d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:16,688 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682516688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689682516688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689682516688"}]},"ts":"1689682516688"} 2023-07-18 12:15:16,689 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 06b19b4fcedcf2378d8c841992d20b6d, server=jenkins-hbase4.apache.org,44161,1689682513494}] 2023-07-18 12:15:16,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 12:15:16,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 06b19b4fcedcf2378d8c841992d20b6d, disabling compactions & flushes 2023-07-18 12:15:16,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. after waiting 0 ms 2023-07-18 12:15:16,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 
2023-07-18 12:15:16,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 12:15:16,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d. 2023-07-18 12:15:16,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 06b19b4fcedcf2378d8c841992d20b6d: 2023-07-18 12:15:16,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:16,849 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=06b19b4fcedcf2378d8c841992d20b6d, regionState=CLOSED 2023-07-18 12:15:16,849 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689682516848"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689682516848"}]},"ts":"1689682516848"} 2023-07-18 12:15:16,852 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 12:15:16,852 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 06b19b4fcedcf2378d8c841992d20b6d, server=jenkins-hbase4.apache.org,44161,1689682513494 in 161 msec 2023-07-18 12:15:16,854 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 12:15:16,854 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=06b19b4fcedcf2378d8c841992d20b6d, UNASSIGN in 165 msec 2023-07-18 12:15:16,855 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689682516855"}]},"ts":"1689682516855"} 2023-07-18 12:15:16,856 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-18 12:15:16,858 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-18 12:15:16,860 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 181 msec 2023-07-18 12:15:16,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 12:15:16,985 INFO [Listener at localhost/41565] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-18 12:15:16,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-18 12:15:16,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-18 12:15:16,989 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 12:15:16,989 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-18 12:15:16,989 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-18 12:15:16,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:16,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:16,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:16,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 12:15:17,004 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:17,007 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d/cf1, FileablePath, hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d/recovered.edits] 2023-07-18 12:15:17,013 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d/recovered.edits/4.seqid to hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/archive/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d/recovered.edits/4.seqid 2023-07-18 12:15:17,014 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/.tmp/data/default/t1/06b19b4fcedcf2378d8c841992d20b6d 2023-07-18 12:15:17,014 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 12:15:17,017 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-18 12:15:17,019 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-18 12:15:17,021 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-18 12:15:17,022 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-18 12:15:17,022 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-18 12:15:17,022 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689682517022"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:17,024 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 12:15:17,024 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 06b19b4fcedcf2378d8c841992d20b6d, NAME => 't1,,1689682515952.06b19b4fcedcf2378d8c841992d20b6d.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 12:15:17,024 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-18 12:15:17,024 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689682517024"}]},"ts":"9223372036854775807"} 2023-07-18 12:15:17,025 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-18 12:15:17,027 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 12:15:17,028 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 41 msec 2023-07-18 12:15:17,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 12:15:17,095 INFO [Listener at localhost/41565] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-18 12:15:17,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:17,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:15:17,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:17,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:17,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:17,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:17,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:17,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,120 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:17,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:17,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:17,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:17,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34648 deadline: 1689683717136, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 2023-07-18 12:15:17,136 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:17,141 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:17,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,143 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:17,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:17,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:17,169 INFO [Listener at localhost/41565] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 565) - Thread LEAK? -, OpenFileDescriptor=848 (was 846) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=454 (was 432) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 174), AvailableMemoryMB=4372 (was 4374) 2023-07-18 12:15:17,169 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 12:15:17,193 INFO [Listener at localhost/41565] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573, OpenFileDescriptor=848, MaxFileDescriptor=60000, SystemLoadAverage=454, ProcessCount=173, AvailableMemoryMB=4371 2023-07-18 12:15:17,193 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 12:15:17,193 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-18 12:15:17,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:17,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 12:15:17,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:17,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:17,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:17,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:17,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:17,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,208 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:17,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:17,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-18 12:15:17,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:17,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:17,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34648 deadline: 1689683717218, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 2023-07-18 12:15:17,219 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 12:15:17,221 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:17,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,222 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:17,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:17,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:17,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 12:15:17,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:17,225 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-18 12:15:17,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 12:15:17,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 12:15:17,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:17,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:15:17,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:17,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:17,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:17,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:17,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:17,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,242 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:17,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:17,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:17,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:17,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34648 deadline: 1689683717256, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 2023-07-18 12:15:17,257 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:17,259 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:17,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,260 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:17,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:17,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:17,278 INFO [Listener at localhost/41565] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 573) - Thread LEAK? 
-, OpenFileDescriptor=848 (was 848), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=454 (was 454), ProcessCount=173 (was 173), AvailableMemoryMB=4371 (was 4371) 2023-07-18 12:15:17,278 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-18 12:15:17,294 INFO [Listener at localhost/41565] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=848, MaxFileDescriptor=60000, SystemLoadAverage=454, ProcessCount=173, AvailableMemoryMB=4371 2023-07-18 12:15:17,294 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-18 12:15:17,294 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-18 12:15:17,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:17,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 12:15:17,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:17,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:17,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:17,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:17,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:17,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,306 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:17,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:17,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,308 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:17,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:17,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34648 deadline: 1689683717314, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 2023-07-18 12:15:17,315 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 12:15:17,316 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:17,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,317 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:17,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:17,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:17,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:17,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
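The ConstraintException traced above originates in the shared TestRSGroupsBase setup/teardown path (tearDownAfterMethod reached through setUpBeforeMethod in the stack trace): the harness tries to move the master's address jenkins-hbase4.apache.org:41811 into the re-created "master" rsgroup, RSGroupAdminServer.moveServers refuses it because that address is not a live region server, and the base class only logs "Got this on setup, FYI" before continuing. The Java fragment below is a minimal sketch of that tolerated step, not the test's actual code; the connection handle, host, and port parameters are illustrative assumptions, while RSGroupAdminClient, Address, and ConstraintException are the classes named in the trace.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Sketch of the tolerated cleanup step seen in the log: after the "master"
// rsgroup has been re-created (the AddRSGroup calls above), try to park the
// master's address in it and swallow the expected ConstraintException when
// that address is not a live region server.
public final class MasterGroupCleanupSketch {
  static void moveMasterToMasterGroup(Connection conn, String host, int port) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    try {
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts(host, port)), "master");
    } catch (ConstraintException e) {
      // Expected: "Server ... is either offline or it does not exist."
      // TestRSGroupsBase logs this as "Got this on setup, FYI" and proceeds.
    }
  }
}

Because this step runs once per test method, the same exception and trace repeat in the log before testGroupInfoMultiAccessing finishes, before testNamespaceConstraint starts, and again in its teardown.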
2023-07-18 12:15:17,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:17,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:17,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:17,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:17,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:17,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,332 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:17,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:17,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:17,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:17,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34648 deadline: 1689683717340, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 2023-07-18 12:15:17,340 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:17,342 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:17,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,343 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:17,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:17,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:17,364 INFO [Listener at localhost/41565] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? 
-, OpenFileDescriptor=848 (was 848), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=454 (was 454), ProcessCount=173 (was 173), AvailableMemoryMB=4371 (was 4371) 2023-07-18 12:15:17,364 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-18 12:15:17,383 INFO [Listener at localhost/41565] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=848, MaxFileDescriptor=60000, SystemLoadAverage=454, ProcessCount=173, AvailableMemoryMB=4371 2023-07-18 12:15:17,383 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-18 12:15:17,383 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-18 12:15:17,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:17,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 12:15:17,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:17,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:17,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:17,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:17,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:17,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,396 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:17,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:17,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,398 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:17,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:17,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34648 deadline: 1689683717405, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 2023-07-18 12:15:17,406 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 12:15:17,407 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:17,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,408 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:17,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:17,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:17,409 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-18 12:15:17,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-18 12:15:17,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 12:15:17,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 12:15:17,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 12:15:17,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-18 12:15:17,432 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 12:15:17,435 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 12:15:17,435 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 12:15:17,435 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:15:17,435 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 12:15:17,435 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 12:15:17,435 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 12:15:17,440 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:17,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-18 12:15:17,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 12:15:17,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 12:15:17,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 268 
service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:34648 deadline: 1689683717533, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-18 12:15:17,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 12:15:17,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-18 12:15:17,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 12:15:17,554 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 12:15:17,554 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-18 12:15:17,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 12:15:17,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-18 12:15:17,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 12:15:17,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 12:15:17,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 12:15:17,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-18 12:15:17,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 
12:15:17,671 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 12:15:17,673 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 12:15:17,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 12:15:17,674 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 12:15:17,675 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 12:15:17,675 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 12:15:17,676 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 12:15:17,677 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 12:15:17,678 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-18 12:15:17,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 12:15:17,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 12:15:17,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 12:15:17,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 12:15:17,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
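The testNamespaceConstraint sequence above exercises the binding between a namespace and an rsgroup: a namespace created with hbase.rsgroup.name => 'Group_foo' blocks RemoveRSGroup ("RSGroup Group_foo is referenced by namespace: Group_foo") until the DeleteNamespaceProcedure completes, after which the group can be removed. The fragment below is a rough Java equivalent of that lifecycle under the assumption of an existing Admin handle and RSGroupAdminClient; it reuses the group and namespace names from the log but is a sketch, not the test source.

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Lifecycle mirrored from the log: add the group, bind a namespace to it,
// observe that removing the group is refused while the reference exists,
// then delete the namespace and remove the group.
public final class NamespaceConstraintSketch {
  static void run(Admin admin, RSGroupAdminClient rsGroupAdmin) throws Exception {
    rsGroupAdmin.addRSGroup("Group_foo");
    admin.createNamespace(NamespaceDescriptor.create("Group_foo")
        .addConfiguration("hbase.rsgroup.name", "Group_foo")
        .build());
    try {
      rsGroupAdmin.removeRSGroup("Group_foo"); // refused while the namespace references it
    } catch (ConstraintException expected) {
      // "RSGroup Group_foo is referenced by namespace: Group_foo"
    }
    admin.deleteNamespace("Group_foo");
    rsGroupAdmin.removeRSGroup("Group_foo");   // now succeeds, as in the log
  }
}

The intermediate ModifyNamespaceProcedure in the log (modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'}) is omitted from the sketch; it re-applies the same group binding and does not change the constraint behaviour shown here.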
2023-07-18 12:15:17,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:34648 deadline: 1689682577785, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-18 12:15:17,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:17,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
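The CreateNamespace rejection just above ("Region server group foo does not exist.", raised from RSGroupAdminEndpoint.preCreateNamespace) is the other half of the same constraint: a namespace may only reference an rsgroup that already exists. A hedged sketch of triggering that check follows; the namespace name is an illustrative assumption (the log does not show it), and whether the client surfaces the remote error as ConstraintException or a wrapping IOException may vary.

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.constraint.ConstraintException;

// Sketch: the master coprocessor's preCreateNamespace hook rejects a namespace
// whose hbase.rsgroup.name points at a group that was never added.
public final class MissingGroupNamespaceSketch {
  static void createAgainstMissingGroup(Admin admin) throws Exception {
    try {
      admin.createNamespace(NamespaceDescriptor.create("ns_bound_to_foo") // hypothetical name
          .addConfiguration("hbase.rsgroup.name", "foo")                  // group "foo" was never created
          .build());
    } catch (ConstraintException e) {
      // "Region server group foo does not exist." — matches the log entry above
    }
  }
}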
2023-07-18 12:15:17,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:17,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:17,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:17,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-18 12:15:17,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 12:15:17,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 12:15:17,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 12:15:17,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 12:15:17,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 12:15:17,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 12:15:17,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 12:15:17,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 12:15:17,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 12:15:17,804 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 12:15:17,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 12:15:17,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 12:15:17,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 12:15:17,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 12:15:17,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 12:15:17,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41811] to rsgroup master 2023-07-18 12:15:17,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 12:15:17,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34648 deadline: 1689683717813, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 2023-07-18 12:15:17,814 WARN [Listener at localhost/41565] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41811 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 12:15:17,816 INFO [Listener at localhost/41565] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 12:15:17,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 12:15:17,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 12:15:17,816 INFO [Listener at localhost/41565] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36857, jenkins-hbase4.apache.org:44161, jenkins-hbase4.apache.org:44239, jenkins-hbase4.apache.org:46447], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 12:15:17,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 12:15:17,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41811] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 12:15:17,834 INFO [Listener at localhost/41565] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=848 (was 848), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=454 (was 454), ProcessCount=173 (was 173), AvailableMemoryMB=4369 (was 4371) 2023-07-18 12:15:17,834 WARN [Listener at localhost/41565] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-18 12:15:17,834 INFO [Listener at localhost/41565] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 12:15:17,834 INFO [Listener at localhost/41565] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 12:15:17,834 DEBUG [Listener at localhost/41565] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66e9ca67 to 127.0.0.1:49768 2023-07-18 12:15:17,835 DEBUG [Listener at localhost/41565] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:17,835 DEBUG [Listener at localhost/41565] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 
12:15:17,835 DEBUG [Listener at localhost/41565] util.JVMClusterUtil(257): Found active master hash=1211458280, stopped=false 2023-07-18 12:15:17,835 DEBUG [Listener at localhost/41565] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 12:15:17,835 DEBUG [Listener at localhost/41565] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 12:15:17,835 INFO [Listener at localhost/41565] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:17,837 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:17,837 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:17,837 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:17,837 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:17,837 INFO [Listener at localhost/41565] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 12:15:17,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:17,837 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 12:15:17,837 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:17,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:17,837 DEBUG [Listener at localhost/41565] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x22b6e6cc to 127.0.0.1:49768 2023-07-18 12:15:17,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:17,837 DEBUG [Listener at localhost/41565] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:17,838 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:17,838 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, 
quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 12:15:17,838 INFO [Listener at localhost/41565] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44161,1689682513494' ***** 2023-07-18 12:15:17,838 INFO [Listener at localhost/41565] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:17,838 INFO [Listener at localhost/41565] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44239,1689682513641' ***** 2023-07-18 12:15:17,838 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:17,838 INFO [Listener at localhost/41565] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:17,838 INFO [Listener at localhost/41565] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36857,1689682513792' ***** 2023-07-18 12:15:17,838 INFO [Listener at localhost/41565] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:17,838 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:17,838 INFO [Listener at localhost/41565] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46447,1689682515592' ***** 2023-07-18 12:15:17,838 INFO [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:17,839 INFO [Listener at localhost/41565] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 12:15:17,843 INFO [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:17,847 INFO [RS:0;jenkins-hbase4:44161] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@267d594e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:17,847 INFO [RS:2;jenkins-hbase4:36857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@37db8fa8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:17,847 INFO [RS:3;jenkins-hbase4:46447] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2761a6f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:17,847 INFO [RS:1;jenkins-hbase4:44239] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@546e1439{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 12:15:17,847 INFO [RS:0;jenkins-hbase4:44161] server.AbstractConnector(383): Stopped ServerConnector@7baa41fc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:17,848 INFO [RS:0;jenkins-hbase4:44161] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:17,848 INFO [RS:2;jenkins-hbase4:36857] server.AbstractConnector(383): Stopped ServerConnector@24f59cfd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:17,849 INFO [RS:0;jenkins-hbase4:44161] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@76b3ed90{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:17,848 INFO [RS:3;jenkins-hbase4:46447] server.AbstractConnector(383): Stopped ServerConnector@43dc1f6f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:17,849 INFO [RS:2;jenkins-hbase4:36857] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:17,849 INFO [RS:1;jenkins-hbase4:44239] server.AbstractConnector(383): Stopped ServerConnector@527da838{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:17,849 INFO [RS:0;jenkins-hbase4:44161] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@11fafdd0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:17,849 INFO [RS:3;jenkins-hbase4:46447] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:17,850 INFO [RS:1;jenkins-hbase4:44239] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:17,850 INFO [RS:2;jenkins-hbase4:36857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6d26fc67{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:17,852 INFO [RS:1;jenkins-hbase4:44239] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2bfbb1c9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:17,852 INFO [RS:3;jenkins-hbase4:46447] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@484f4c26{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:17,853 INFO [RS:1;jenkins-hbase4:44239] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@16abae5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:17,852 INFO [RS:2;jenkins-hbase4:36857] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ee861ba{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:17,854 INFO [RS:3;jenkins-hbase4:46447] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@53881145{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:17,853 INFO [RS:0;jenkins-hbase4:44161] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:17,854 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:17,854 INFO [RS:0;jenkins-hbase4:44161] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-18 12:15:17,854 INFO [RS:0;jenkins-hbase4:44161] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:17,854 INFO [RS:2;jenkins-hbase4:36857] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:17,854 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(3305): Received CLOSE for 6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:17,855 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:17,855 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:17,855 INFO [RS:1;jenkins-hbase4:44239] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:17,855 INFO [RS:2;jenkins-hbase4:36857] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:17,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6395d69a3eb7b192943f60c70e614384, disabling compactions & flushes 2023-07-18 12:15:17,855 INFO [RS:2;jenkins-hbase4:36857] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:17,855 INFO [RS:1;jenkins-hbase4:44239] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:17,855 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:17,855 DEBUG [RS:0;jenkins-hbase4:44161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5b82237f to 127.0.0.1:49768 2023-07-18 12:15:17,855 INFO [RS:1;jenkins-hbase4:44239] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 12:15:17,855 INFO [RS:3;jenkins-hbase4:46447] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 12:15:17,855 INFO [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:17,855 INFO [RS:3;jenkins-hbase4:46447] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 12:15:17,855 DEBUG [RS:2;jenkins-hbase4:36857] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6d528288 to 127.0.0.1:49768 2023-07-18 12:15:17,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:17,855 DEBUG [RS:2;jenkins-hbase4:36857] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:17,856 INFO [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36857,1689682513792; all regions closed. 2023-07-18 12:15:17,855 INFO [RS:3;jenkins-hbase4:46447] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 12:15:17,855 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 12:15:17,855 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(3305): Received CLOSE for e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:17,855 DEBUG [RS:0;jenkins-hbase4:44161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:17,856 INFO [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:17,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:17,856 DEBUG [RS:3;jenkins-hbase4:46447] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x03359b97 to 127.0.0.1:49768 2023-07-18 12:15:17,856 INFO [RS:0;jenkins-hbase4:44161] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:17,856 INFO [RS:0;jenkins-hbase4:44161] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:17,856 INFO [RS:0;jenkins-hbase4:44161] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:17,856 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 12:15:17,856 DEBUG [RS:3;jenkins-hbase4:46447] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:17,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. after waiting 0 ms 2023-07-18 12:15:17,856 INFO [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46447,1689682515592; all regions closed. 2023-07-18 12:15:17,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 
2023-07-18 12:15:17,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6395d69a3eb7b192943f60c70e614384 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-18 12:15:17,859 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 12:15:17,859 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 6395d69a3eb7b192943f60c70e614384=hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384.} 2023-07-18 12:15:17,859 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 12:15:17,859 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 12:15:17,859 DEBUG [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1504): Waiting on 1588230740, 6395d69a3eb7b192943f60c70e614384 2023-07-18 12:15:17,859 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:17,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e9bcfb400da6f5dc4aa7b8dba733d5e1, disabling compactions & flushes 2023-07-18 12:15:17,859 DEBUG [RS:1;jenkins-hbase4:44239] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1992edaa to 127.0.0.1:49768 2023-07-18 12:15:17,859 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 12:15:17,859 DEBUG [RS:1;jenkins-hbase4:44239] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:17,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:17,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:17,860 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 12:15:17,859 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 12:15:17,860 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1478): Online Regions={e9bcfb400da6f5dc4aa7b8dba733d5e1=hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1.} 2023-07-18 12:15:17,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. after waiting 0 ms 2023-07-18 12:15:17,860 DEBUG [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1504): Waiting on e9bcfb400da6f5dc4aa7b8dba733d5e1 2023-07-18 12:15:17,860 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 12:15:17,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 
2023-07-18 12:15:17,860 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-18 12:15:17,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e9bcfb400da6f5dc4aa7b8dba733d5e1 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-18 12:15:17,871 DEBUG [RS:3;jenkins-hbase4:46447] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs 2023-07-18 12:15:17,871 INFO [RS:3;jenkins-hbase4:46447] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46447%2C1689682515592:(num 1689682515919) 2023-07-18 12:15:17,871 DEBUG [RS:3;jenkins-hbase4:46447] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:17,871 INFO [RS:3;jenkins-hbase4:46447] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:17,871 INFO [RS:3;jenkins-hbase4:46447] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:17,871 INFO [RS:3;jenkins-hbase4:46447] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:17,872 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:17,872 INFO [RS:3;jenkins-hbase4:46447] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:17,872 INFO [RS:3;jenkins-hbase4:46447] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:17,874 INFO [RS:3;jenkins-hbase4:46447] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46447 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46447,1689682515592 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:17,876 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:17,877 DEBUG [RS:2;jenkins-hbase4:36857] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs 2023-07-18 12:15:17,877 INFO [RS:2;jenkins-hbase4:36857] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36857%2C1689682513792:(num 1689682514353) 2023-07-18 12:15:17,877 DEBUG [RS:2;jenkins-hbase4:36857] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:17,877 INFO [RS:2;jenkins-hbase4:36857] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:17,878 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46447,1689682515592] 2023-07-18 12:15:17,878 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46447,1689682515592; numProcessing=1 2023-07-18 12:15:17,879 INFO [RS:2;jenkins-hbase4:36857] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:17,879 INFO [RS:2;jenkins-hbase4:36857] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:17,879 INFO [RS:2;jenkins-hbase4:36857] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:17,879 INFO [RS:2;jenkins-hbase4:36857] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:17,879 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 12:15:17,880 INFO [RS:2;jenkins-hbase4:36857] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36857 2023-07-18 12:15:17,880 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46447,1689682515592 already deleted, retry=false 2023-07-18 12:15:17,880 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46447,1689682515592 expired; onlineServers=3 2023-07-18 12:15:17,881 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:17,881 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:17,881 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:17,881 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36857,1689682513792 2023-07-18 12:15:17,882 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36857,1689682513792] 2023-07-18 12:15:17,882 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36857,1689682513792; numProcessing=2 2023-07-18 12:15:17,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384/.tmp/m/9e09bc1472c6412c81471f3d3d43a9b6 2023-07-18 12:15:17,883 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36857,1689682513792 already deleted, retry=false 2023-07-18 12:15:17,883 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36857,1689682513792 expired; onlineServers=2 2023-07-18 12:15:17,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e09bc1472c6412c81471f3d3d43a9b6 2023-07-18 12:15:17,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384/.tmp/m/9e09bc1472c6412c81471f3d3d43a9b6 as hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384/m/9e09bc1472c6412c81471f3d3d43a9b6 2023-07-18 12:15:17,895 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 
KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/.tmp/info/78e6517a45bd4bfa8511c4f26fd6a21b 2023-07-18 12:15:17,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1/.tmp/info/0b0082ab128c4b5bab8a1702b5bf96bf 2023-07-18 12:15:17,895 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:17,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e09bc1472c6412c81471f3d3d43a9b6 2023-07-18 12:15:17,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384/m/9e09bc1472c6412c81471f3d3d43a9b6, entries=12, sequenceid=29, filesize=5.4 K 2023-07-18 12:15:17,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 6395d69a3eb7b192943f60c70e614384 in 47ms, sequenceid=29, compaction requested=false 2023-07-18 12:15:17,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 12:15:17,903 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 78e6517a45bd4bfa8511c4f26fd6a21b 2023-07-18 12:15:17,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0b0082ab128c4b5bab8a1702b5bf96bf 2023-07-18 12:15:17,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1/.tmp/info/0b0082ab128c4b5bab8a1702b5bf96bf as hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1/info/0b0082ab128c4b5bab8a1702b5bf96bf 2023-07-18 12:15:17,907 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:17,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/rsgroup/6395d69a3eb7b192943f60c70e614384/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-18 12:15:17,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:15:17,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 
2023-07-18 12:15:17,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6395d69a3eb7b192943f60c70e614384: 2023-07-18 12:15:17,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689682514702.6395d69a3eb7b192943f60c70e614384. 2023-07-18 12:15:17,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0b0082ab128c4b5bab8a1702b5bf96bf 2023-07-18 12:15:17,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1/info/0b0082ab128c4b5bab8a1702b5bf96bf, entries=3, sequenceid=9, filesize=5.0 K 2023-07-18 12:15:17,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for e9bcfb400da6f5dc4aa7b8dba733d5e1 in 55ms, sequenceid=9, compaction requested=false 2023-07-18 12:15:17,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 12:15:17,932 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:17,932 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:17,936 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/.tmp/rep_barrier/92c04ad00fd24208989bc8bdee83c6b0 2023-07-18 12:15:17,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/namespace/e9bcfb400da6f5dc4aa7b8dba733d5e1/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-18 12:15:17,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 2023-07-18 12:15:17,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e9bcfb400da6f5dc4aa7b8dba733d5e1: 2023-07-18 12:15:17,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689682514514.e9bcfb400da6f5dc4aa7b8dba733d5e1. 
2023-07-18 12:15:17,942 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92c04ad00fd24208989bc8bdee83c6b0 2023-07-18 12:15:17,952 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/.tmp/table/031553cf57d04797b3711880cc173676 2023-07-18 12:15:17,957 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 031553cf57d04797b3711880cc173676 2023-07-18 12:15:17,958 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/.tmp/info/78e6517a45bd4bfa8511c4f26fd6a21b as hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/info/78e6517a45bd4bfa8511c4f26fd6a21b 2023-07-18 12:15:17,965 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 78e6517a45bd4bfa8511c4f26fd6a21b 2023-07-18 12:15:17,966 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/info/78e6517a45bd4bfa8511c4f26fd6a21b, entries=22, sequenceid=26, filesize=7.3 K 2023-07-18 12:15:17,967 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/.tmp/rep_barrier/92c04ad00fd24208989bc8bdee83c6b0 as hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/rep_barrier/92c04ad00fd24208989bc8bdee83c6b0 2023-07-18 12:15:17,972 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92c04ad00fd24208989bc8bdee83c6b0 2023-07-18 12:15:17,972 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/rep_barrier/92c04ad00fd24208989bc8bdee83c6b0, entries=1, sequenceid=26, filesize=4.9 K 2023-07-18 12:15:17,973 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/.tmp/table/031553cf57d04797b3711880cc173676 as hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/table/031553cf57d04797b3711880cc173676 2023-07-18 12:15:17,978 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 031553cf57d04797b3711880cc173676 2023-07-18 12:15:17,978 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/table/031553cf57d04797b3711880cc173676, 
entries=6, sequenceid=26, filesize=5.1 K 2023-07-18 12:15:17,978 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 118ms, sequenceid=26, compaction requested=false 2023-07-18 12:15:17,979 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 12:15:17,987 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-18 12:15:17,988 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 12:15:17,988 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 12:15:17,988 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 12:15:17,988 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 12:15:18,036 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,036 INFO [RS:2;jenkins-hbase4:36857] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36857,1689682513792; zookeeper connection closed. 2023-07-18 12:15:18,036 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:36857-0x101785b908e0003, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,037 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@44d5580e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@44d5580e 2023-07-18 12:15:18,059 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44161,1689682513494; all regions closed. 2023-07-18 12:15:18,060 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44239,1689682513641; all regions closed. 
2023-07-18 12:15:18,064 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44161,1689682513494/jenkins-hbase4.apache.org%2C44161%2C1689682513494.meta.1689682514459.meta not finished, retry = 0 2023-07-18 12:15:18,067 DEBUG [RS:1;jenkins-hbase4:44239] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs 2023-07-18 12:15:18,067 INFO [RS:1;jenkins-hbase4:44239] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44239%2C1689682513641:(num 1689682514361) 2023-07-18 12:15:18,067 DEBUG [RS:1;jenkins-hbase4:44239] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:18,067 INFO [RS:1;jenkins-hbase4:44239] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:18,068 INFO [RS:1;jenkins-hbase4:44239] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:18,068 INFO [RS:1;jenkins-hbase4:44239] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 12:15:18,068 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:18,068 INFO [RS:1;jenkins-hbase4:44239] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 12:15:18,068 INFO [RS:1;jenkins-hbase4:44239] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 12:15:18,069 INFO [RS:1;jenkins-hbase4:44239] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44239 2023-07-18 12:15:18,071 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:18,071 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44239,1689682513641 2023-07-18 12:15:18,071 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:18,072 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44239,1689682513641] 2023-07-18 12:15:18,072 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44239,1689682513641; numProcessing=3 2023-07-18 12:15:18,073 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44239,1689682513641 already deleted, retry=false 2023-07-18 12:15:18,073 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44239,1689682513641 expired; onlineServers=1 2023-07-18 12:15:18,137 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:46447-0x101785b908e000b, 
quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,137 INFO [RS:3;jenkins-hbase4:46447] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46447,1689682515592; zookeeper connection closed. 2023-07-18 12:15:18,137 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:46447-0x101785b908e000b, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,137 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5cc0b7e7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5cc0b7e7 2023-07-18 12:15:18,167 DEBUG [RS:0;jenkins-hbase4:44161] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs 2023-07-18 12:15:18,167 INFO [RS:0;jenkins-hbase4:44161] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44161%2C1689682513494.meta:.meta(num 1689682514459) 2023-07-18 12:15:18,170 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/WALs/jenkins-hbase4.apache.org,44161,1689682513494/jenkins-hbase4.apache.org%2C44161%2C1689682513494.1689682514341 not finished, retry = 0 2023-07-18 12:15:18,203 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 12:15:18,203 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 12:15:18,273 DEBUG [RS:0;jenkins-hbase4:44161] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/oldWALs 2023-07-18 12:15:18,273 INFO [RS:0;jenkins-hbase4:44161] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44161%2C1689682513494:(num 1689682514341) 2023-07-18 12:15:18,273 DEBUG [RS:0;jenkins-hbase4:44161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:18,273 INFO [RS:0;jenkins-hbase4:44161] regionserver.LeaseManager(133): Closed leases 2023-07-18 12:15:18,273 INFO [RS:0;jenkins-hbase4:44161] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 12:15:18,273 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 12:15:18,274 INFO [RS:0;jenkins-hbase4:44161] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44161 2023-07-18 12:15:18,277 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44161,1689682513494 2023-07-18 12:15:18,277 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 12:15:18,278 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44161,1689682513494] 2023-07-18 12:15:18,278 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44161,1689682513494; numProcessing=4 2023-07-18 12:15:18,279 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44161,1689682513494 already deleted, retry=false 2023-07-18 12:15:18,279 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44161,1689682513494 expired; onlineServers=0 2023-07-18 12:15:18,279 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41811,1689682513314' ***** 2023-07-18 12:15:18,279 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 12:15:18,280 DEBUG [M:0;jenkins-hbase4:41811] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c0aff36, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 12:15:18,280 INFO [M:0;jenkins-hbase4:41811] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 12:15:18,282 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 12:15:18,282 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 12:15:18,283 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 12:15:18,283 INFO [M:0;jenkins-hbase4:41811] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@569e02a{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 12:15:18,283 INFO [M:0;jenkins-hbase4:41811] server.AbstractConnector(383): Stopped ServerConnector@177c20de{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:18,283 INFO [M:0;jenkins-hbase4:41811] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 12:15:18,284 INFO [M:0;jenkins-hbase4:41811] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@375d08da{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 12:15:18,284 INFO [M:0;jenkins-hbase4:41811] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2b7f49a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/hadoop.log.dir/,STOPPED} 2023-07-18 12:15:18,284 INFO [M:0;jenkins-hbase4:41811] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41811,1689682513314 2023-07-18 12:15:18,284 INFO [M:0;jenkins-hbase4:41811] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41811,1689682513314; all regions closed. 2023-07-18 12:15:18,284 DEBUG [M:0;jenkins-hbase4:41811] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 12:15:18,285 INFO [M:0;jenkins-hbase4:41811] master.HMaster(1491): Stopping master jetty server 2023-07-18 12:15:18,285 INFO [M:0;jenkins-hbase4:41811] server.AbstractConnector(383): Stopped ServerConnector@5d79338a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 12:15:18,285 DEBUG [M:0;jenkins-hbase4:41811] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 12:15:18,285 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 12:15:18,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682514100] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689682514100,5,FailOnTimeoutGroup] 2023-07-18 12:15:18,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682514100] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689682514100,5,FailOnTimeoutGroup] 2023-07-18 12:15:18,285 DEBUG [M:0;jenkins-hbase4:41811] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 12:15:18,286 INFO [M:0;jenkins-hbase4:41811] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 12:15:18,286 INFO [M:0;jenkins-hbase4:41811] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 12:15:18,286 INFO [M:0;jenkins-hbase4:41811] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 12:15:18,286 DEBUG [M:0;jenkins-hbase4:41811] master.HMaster(1512): Stopping service threads 2023-07-18 12:15:18,286 INFO [M:0;jenkins-hbase4:41811] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 12:15:18,286 ERROR [M:0;jenkins-hbase4:41811] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 12:15:18,286 INFO [M:0;jenkins-hbase4:41811] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 12:15:18,286 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-18 12:15:18,287 DEBUG [M:0;jenkins-hbase4:41811] zookeeper.ZKUtil(398): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 12:15:18,287 WARN [M:0;jenkins-hbase4:41811] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 12:15:18,287 INFO [M:0;jenkins-hbase4:41811] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 12:15:18,287 INFO [M:0;jenkins-hbase4:41811] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 12:15:18,287 DEBUG [M:0;jenkins-hbase4:41811] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 12:15:18,287 INFO [M:0;jenkins-hbase4:41811] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:18,287 DEBUG [M:0;jenkins-hbase4:41811] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:18,287 DEBUG [M:0;jenkins-hbase4:41811] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 12:15:18,287 DEBUG [M:0;jenkins-hbase4:41811] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 12:15:18,287 INFO [M:0;jenkins-hbase4:41811] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.22 KB heapSize=90.66 KB 2023-07-18 12:15:18,299 INFO [M:0;jenkins-hbase4:41811] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.22 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c51885d440b84b4e9ecedb4f1112e64c 2023-07-18 12:15:18,305 DEBUG [M:0;jenkins-hbase4:41811] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c51885d440b84b4e9ecedb4f1112e64c as hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c51885d440b84b4e9ecedb4f1112e64c 2023-07-18 12:15:18,309 INFO [M:0;jenkins-hbase4:41811] regionserver.HStore(1080): Added hdfs://localhost:33969/user/jenkins/test-data/aef2d273-6300-39c1-41fc-bef35b40bd31/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c51885d440b84b4e9ecedb4f1112e64c, entries=22, sequenceid=175, filesize=11.1 K 2023-07-18 12:15:18,310 INFO [M:0;jenkins-hbase4:41811] regionserver.HRegion(2948): Finished flush of dataSize ~76.22 KB/78047, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=175, compaction requested=false 2023-07-18 12:15:18,312 INFO [M:0;jenkins-hbase4:41811] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 12:15:18,312 DEBUG [M:0;jenkins-hbase4:41811] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 12:15:18,316 INFO [M:0;jenkins-hbase4:41811] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 12:15:18,316 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 12:15:18,316 INFO [M:0;jenkins-hbase4:41811] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41811 2023-07-18 12:15:18,319 DEBUG [M:0;jenkins-hbase4:41811] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41811,1689682513314 already deleted, retry=false 2023-07-18 12:15:18,738 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,738 INFO [M:0;jenkins-hbase4:41811] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41811,1689682513314; zookeeper connection closed. 2023-07-18 12:15:18,738 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): master:41811-0x101785b908e0000, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,838 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,839 INFO [RS:0;jenkins-hbase4:44161] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44161,1689682513494; zookeeper connection closed. 2023-07-18 12:15:18,839 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44161-0x101785b908e0001, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,839 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6d6deffb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6d6deffb 2023-07-18 12:15:18,939 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,939 DEBUG [Listener at localhost/41565-EventThread] zookeeper.ZKWatcher(600): regionserver:44239-0x101785b908e0002, quorum=127.0.0.1:49768, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 12:15:18,939 INFO [RS:1;jenkins-hbase4:44239] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44239,1689682513641; zookeeper connection closed. 
2023-07-18 12:15:18,939 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@629f9399] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@629f9399 2023-07-18 12:15:18,939 INFO [Listener at localhost/41565] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 12:15:18,940 WARN [Listener at localhost/41565] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:18,943 INFO [Listener at localhost/41565] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:19,045 WARN [BP-1018421467-172.31.14.131-1689682512614 heartbeating to localhost/127.0.0.1:33969] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 12:15:19,045 WARN [BP-1018421467-172.31.14.131-1689682512614 heartbeating to localhost/127.0.0.1:33969] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1018421467-172.31.14.131-1689682512614 (Datanode Uuid 7bb30ffd-6a55-419b-bcce-1fbc28dc7eff) service to localhost/127.0.0.1:33969 2023-07-18 12:15:19,046 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data5/current/BP-1018421467-172.31.14.131-1689682512614] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:19,047 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data6/current/BP-1018421467-172.31.14.131-1689682512614] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:19,048 WARN [Listener at localhost/41565] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:19,050 INFO [Listener at localhost/41565] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:19,140 WARN [BP-1018421467-172.31.14.131-1689682512614 heartbeating to localhost/127.0.0.1:33969] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1018421467-172.31.14.131-1689682512614 (Datanode Uuid 92390872-58a0-4d76-b6c4-ca8dc8a22705) service to localhost/127.0.0.1:33969 2023-07-18 12:15:19,141 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data3/current/BP-1018421467-172.31.14.131-1689682512614] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:19,142 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data4/current/BP-1018421467-172.31.14.131-1689682512614] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:19,155 WARN [Listener at localhost/41565] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 12:15:19,159 INFO [Listener at 
localhost/41565] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:19,163 WARN [BP-1018421467-172.31.14.131-1689682512614 heartbeating to localhost/127.0.0.1:33969] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 12:15:19,163 WARN [BP-1018421467-172.31.14.131-1689682512614 heartbeating to localhost/127.0.0.1:33969] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1018421467-172.31.14.131-1689682512614 (Datanode Uuid 192c3ef7-b146-4b84-9198-0128bf8ba6e8) service to localhost/127.0.0.1:33969 2023-07-18 12:15:19,163 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data1/current/BP-1018421467-172.31.14.131-1689682512614] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:19,164 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/754a1003-b4e2-c863-bdce-f6f6a8ffd019/cluster_910b7dcf-bb4e-abbb-b1e9-1ef5a12fdd7a/dfs/data/data2/current/BP-1018421467-172.31.14.131-1689682512614] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 12:15:19,173 INFO [Listener at localhost/41565] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 12:15:19,287 INFO [Listener at localhost/41565] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 12:15:19,312 INFO [Listener at localhost/41565] hbase.HBaseTestingUtility(1293): Minicluster is down