2023-07-24 06:10:35,282 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87
2023-07-24 06:10:35,301 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-24 06:10:35,320 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-24 06:10:35,320 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be, deleteOnExit=true
2023-07-24 06:10:35,320 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-24 06:10:35,321 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/test.cache.data in system properties and HBase conf
2023-07-24 06:10:35,321 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.tmp.dir in system properties and HBase conf
2023-07-24 06:10:35,322 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir in system properties and HBase conf
2023-07-24 06:10:35,322 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-24 06:10:35,322 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-24 06:10:35,323 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-24 06:10:35,428 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-24 06:10:35,825 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem.
Skipping on block location reordering 2023-07-24 06:10:35,831 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 06:10:35,832 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 06:10:35,832 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 06:10:35,833 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 06:10:35,833 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 06:10:35,834 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 06:10:35,834 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 06:10:35,835 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 06:10:35,835 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 06:10:35,835 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/nfs.dump.dir in system properties and HBase conf 2023-07-24 06:10:35,835 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir in system properties and HBase conf 2023-07-24 06:10:35,836 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 06:10:35,836 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 06:10:35,836 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 06:10:36,325 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 06:10:36,329 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 06:10:36,674 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-24 06:10:36,870 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-24 06:10:36,885 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:10:36,929 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:10:36,979 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/Jetty_localhost_44695_hdfs____.jsvt38/webapp 2023-07-24 06:10:37,133 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44695 2023-07-24 06:10:37,146 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 06:10:37,146 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 06:10:37,706 WARN [Listener at localhost/41501] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:10:37,801 WARN [Listener at localhost/41501] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:10:37,824 WARN [Listener at localhost/41501] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:10:37,837 INFO [Listener at localhost/41501] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:10:37,848 INFO [Listener at localhost/41501] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/Jetty_localhost_41685_datanode____.7w3g1v/webapp 2023-07-24 06:10:37,999 INFO [Listener at localhost/41501] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41685 2023-07-24 06:10:38,462 WARN [Listener at localhost/44499] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:10:38,503 WARN [Listener at localhost/44499] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:10:38,510 WARN [Listener at localhost/44499] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:10:38,513 INFO [Listener at localhost/44499] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:10:38,519 INFO [Listener at localhost/44499] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/Jetty_localhost_46657_datanode____.yq9pc3/webapp 2023-07-24 06:10:38,619 INFO [Listener at localhost/44499] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46657 2023-07-24 06:10:38,633 WARN [Listener at localhost/34139] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:10:38,666 WARN [Listener at localhost/34139] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:10:38,670 WARN [Listener at localhost/34139] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:10:38,672 INFO [Listener at localhost/34139] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:10:38,677 INFO [Listener at localhost/34139] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/Jetty_localhost_33199_datanode____di2rko/webapp 2023-07-24 06:10:38,824 INFO [Listener at localhost/34139] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33199 2023-07-24 06:10:38,839 WARN [Listener at localhost/46655] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:10:39,102 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xff35d37eec77c1fb: Processing first storage report for DS-6d613184-002e-4bc1-818d-19f01e921e96 from datanode 37b2cf48-1551-4d50-81c2-781c9d3bfc61 2023-07-24 06:10:39,103 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xff35d37eec77c1fb: from storage DS-6d613184-002e-4bc1-818d-19f01e921e96 node DatanodeRegistration(127.0.0.1:36273, datanodeUuid=37b2cf48-1551-4d50-81c2-781c9d3bfc61, infoPort=46291, 
infoSecurePort=0, ipcPort=44499, storageInfo=lv=-57;cid=testClusterID;nsid=925961366;c=1690179036409), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-24 06:10:39,104 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x747ef89fecaeed1c: Processing first storage report for DS-bc172f24-05df-4aac-85b6-4bdb55b9237c from datanode 000834ba-e84b-4c5b-8813-4b84d1418ad4 2023-07-24 06:10:39,104 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x747ef89fecaeed1c: from storage DS-bc172f24-05df-4aac-85b6-4bdb55b9237c node DatanodeRegistration(127.0.0.1:42505, datanodeUuid=000834ba-e84b-4c5b-8813-4b84d1418ad4, infoPort=37449, infoSecurePort=0, ipcPort=34139, storageInfo=lv=-57;cid=testClusterID;nsid=925961366;c=1690179036409), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:10:39,104 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe14d7a21dccfb0a1: Processing first storage report for DS-c6b24760-0e8e-4bab-a663-083e67e7e743 from datanode f1349270-4b00-4f7a-85d3-0ec53fd9bf86 2023-07-24 06:10:39,104 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe14d7a21dccfb0a1: from storage DS-c6b24760-0e8e-4bab-a663-083e67e7e743 node DatanodeRegistration(127.0.0.1:43363, datanodeUuid=f1349270-4b00-4f7a-85d3-0ec53fd9bf86, infoPort=35349, infoSecurePort=0, ipcPort=46655, storageInfo=lv=-57;cid=testClusterID;nsid=925961366;c=1690179036409), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:10:39,104 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xff35d37eec77c1fb: Processing first storage report for DS-d533c227-6dce-4206-bcfd-d34c8aff6ce9 from datanode 37b2cf48-1551-4d50-81c2-781c9d3bfc61 2023-07-24 06:10:39,104 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xff35d37eec77c1fb: from storage DS-d533c227-6dce-4206-bcfd-d34c8aff6ce9 node DatanodeRegistration(127.0.0.1:36273, datanodeUuid=37b2cf48-1551-4d50-81c2-781c9d3bfc61, infoPort=46291, infoSecurePort=0, ipcPort=44499, storageInfo=lv=-57;cid=testClusterID;nsid=925961366;c=1690179036409), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:10:39,104 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x747ef89fecaeed1c: Processing first storage report for DS-b29b0711-aa7f-4ff0-8bf1-ae14a9a84a11 from datanode 000834ba-e84b-4c5b-8813-4b84d1418ad4 2023-07-24 06:10:39,104 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x747ef89fecaeed1c: from storage DS-b29b0711-aa7f-4ff0-8bf1-ae14a9a84a11 node DatanodeRegistration(127.0.0.1:42505, datanodeUuid=000834ba-e84b-4c5b-8813-4b84d1418ad4, infoPort=37449, infoSecurePort=0, ipcPort=34139, storageInfo=lv=-57;cid=testClusterID;nsid=925961366;c=1690179036409), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-24 06:10:39,105 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe14d7a21dccfb0a1: Processing first storage report for DS-01fc3a25-07b5-4e0c-81ef-8f9dbdc211a9 from datanode f1349270-4b00-4f7a-85d3-0ec53fd9bf86 2023-07-24 06:10:39,105 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe14d7a21dccfb0a1: from storage 
DS-01fc3a25-07b5-4e0c-81ef-8f9dbdc211a9 node DatanodeRegistration(127.0.0.1:43363, datanodeUuid=f1349270-4b00-4f7a-85d3-0ec53fd9bf86, infoPort=35349, infoSecurePort=0, ipcPort=46655, storageInfo=lv=-57;cid=testClusterID;nsid=925961366;c=1690179036409), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:10:39,321 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87 2023-07-24 06:10:39,431 INFO [Listener at localhost/46655] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/zookeeper_0, clientPort=54990, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 06:10:39,453 INFO [Listener at localhost/46655] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54990 2023-07-24 06:10:39,466 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:39,468 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:40,203 INFO [Listener at localhost/46655] util.FSUtils(471): Created version file at hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50 with version=8 2023-07-24 06:10:40,204 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/hbase-staging 2023-07-24 06:10:40,212 DEBUG [Listener at localhost/46655] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 06:10:40,212 DEBUG [Listener at localhost/46655] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 06:10:40,212 DEBUG [Listener at localhost/46655] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 06:10:40,213 DEBUG [Listener at localhost/46655] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
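The StartMiniClusterOption printed at the top of this log is produced by the standard HBaseTestingUtility startup path. A minimal sketch of how a test would request this topology (1 master, 3 region servers, 3 datanodes, 1 ZooKeeper server), assuming the stock HBase 2.x test APIs rather than the actual TestRSGroupsAdmin1 source:

// Hypothetical test class, shown only to illustrate the startup options logged above.
import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;

public class MiniClusterStartupSketch {
  // HBaseClassTestRule enforces the per-class timeout reported in the log (13 mins here).
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(MiniClusterStartupSketch.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Mirrors StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}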
2023-07-24 06:10:40,612 INFO [Listener at localhost/46655] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-24 06:10:41,247 INFO [Listener at localhost/46655] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:10:41,289 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:41,290 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:41,291 INFO [Listener at localhost/46655] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:10:41,291 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:41,291 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:10:41,473 INFO [Listener at localhost/46655] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:10:41,559 DEBUG [Listener at localhost/46655] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-24 06:10:41,672 INFO [Listener at localhost/46655] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39303 2023-07-24 06:10:41,684 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:41,687 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:41,714 INFO [Listener at localhost/46655] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39303 connecting to ZooKeeper ensemble=127.0.0.1:54990 2023-07-24 06:10:41,769 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:393030x0, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:10:41,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39303-0x10195f3f3a20000 connected 2023-07-24 06:10:41,801 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:10:41,802 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:10:41,806 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:10:41,816 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39303 2023-07-24 06:10:41,816 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39303 2023-07-24 06:10:41,818 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39303 2023-07-24 06:10:41,819 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39303 2023-07-24 06:10:41,820 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39303 2023-07-24 06:10:41,861 INFO [Listener at localhost/46655] log.Log(170): Logging initialized @7462ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-24 06:10:42,009 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:10:42,010 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:10:42,011 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:10:42,013 INFO [Listener at localhost/46655] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 06:10:42,013 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:10:42,013 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:10:42,017 INFO [Listener at localhost/46655] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 06:10:42,099 INFO [Listener at localhost/46655] http.HttpServer(1146): Jetty bound to port 33633 2023-07-24 06:10:42,100 INFO [Listener at localhost/46655] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:10:42,132 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,135 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2db17b81{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:10:42,136 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,136 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1a079e3c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:10:42,314 INFO [Listener at localhost/46655] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:10:42,327 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:10:42,328 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:10:42,330 INFO [Listener at localhost/46655] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 06:10:42,337 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,364 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@47177c10{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/jetty-0_0_0_0-33633-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2185810512269242083/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 06:10:42,377 INFO [Listener at localhost/46655] server.AbstractConnector(333): Started ServerConnector@cbd2559{HTTP/1.1, (http/1.1)}{0.0.0.0:33633} 2023-07-24 06:10:42,377 INFO [Listener at localhost/46655] server.Server(415): Started @7978ms 2023-07-24 06:10:42,381 INFO [Listener at localhost/46655] master.HMaster(444): hbase.rootdir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50, hbase.cluster.distributed=false 2023-07-24 06:10:42,474 INFO [Listener at localhost/46655] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:10:42,474 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,474 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,474 INFO 
[Listener at localhost/46655] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:10:42,474 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,475 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:10:42,481 INFO [Listener at localhost/46655] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:10:42,485 INFO [Listener at localhost/46655] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38203 2023-07-24 06:10:42,489 INFO [Listener at localhost/46655] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:10:42,500 DEBUG [Listener at localhost/46655] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:10:42,502 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:42,505 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:42,507 INFO [Listener at localhost/46655] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38203 connecting to ZooKeeper ensemble=127.0.0.1:54990 2023-07-24 06:10:42,520 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:382030x0, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:10:42,522 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:382030x0, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:10:42,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38203-0x10195f3f3a20001 connected 2023-07-24 06:10:42,529 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:10:42,531 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:10:42,532 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38203 2023-07-24 06:10:42,532 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38203 2023-07-24 06:10:42,533 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38203 2023-07-24 06:10:42,535 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38203 2023-07-24 06:10:42,536 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38203 2023-07-24 06:10:42,539 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:10:42,540 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:10:42,540 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:10:42,542 INFO [Listener at localhost/46655] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:10:42,542 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:10:42,542 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:10:42,543 INFO [Listener at localhost/46655] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 06:10:42,546 INFO [Listener at localhost/46655] http.HttpServer(1146): Jetty bound to port 44877 2023-07-24 06:10:42,546 INFO [Listener at localhost/46655] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:10:42,558 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,558 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@21aec1e3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:10:42,559 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,559 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6289171c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:10:42,698 INFO [Listener at localhost/46655] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:10:42,700 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:10:42,701 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:10:42,701 INFO [Listener at localhost/46655] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 06:10:42,703 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,709 INFO 
[Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@36536101{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/jetty-0_0_0_0-44877-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2439302949303594307/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:10:42,710 INFO [Listener at localhost/46655] server.AbstractConnector(333): Started ServerConnector@5b42239d{HTTP/1.1, (http/1.1)}{0.0.0.0:44877} 2023-07-24 06:10:42,711 INFO [Listener at localhost/46655] server.Server(415): Started @8311ms 2023-07-24 06:10:42,726 INFO [Listener at localhost/46655] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:10:42,727 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,727 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,728 INFO [Listener at localhost/46655] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:10:42,728 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,728 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:10:42,728 INFO [Listener at localhost/46655] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:10:42,730 INFO [Listener at localhost/46655] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40449 2023-07-24 06:10:42,730 INFO [Listener at localhost/46655] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:10:42,736 DEBUG [Listener at localhost/46655] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:10:42,737 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:42,739 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:42,740 INFO [Listener at localhost/46655] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40449 connecting to ZooKeeper ensemble=127.0.0.1:54990 2023-07-24 06:10:42,745 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:404490x0, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 
06:10:42,746 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:404490x0, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:10:42,747 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:404490x0, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:10:42,748 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:404490x0, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:10:42,755 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40449-0x10195f3f3a20002 connected 2023-07-24 06:10:42,756 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40449 2023-07-24 06:10:42,756 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40449 2023-07-24 06:10:42,756 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40449 2023-07-24 06:10:42,757 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40449 2023-07-24 06:10:42,757 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40449 2023-07-24 06:10:42,760 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:10:42,760 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:10:42,760 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:10:42,761 INFO [Listener at localhost/46655] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:10:42,761 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:10:42,761 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:10:42,762 INFO [Listener at localhost/46655] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 06:10:42,762 INFO [Listener at localhost/46655] http.HttpServer(1146): Jetty bound to port 36189 2023-07-24 06:10:42,763 INFO [Listener at localhost/46655] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:10:42,768 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,768 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f68fdb3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:10:42,769 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,769 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10237f46{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:10:42,920 INFO [Listener at localhost/46655] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:10:42,922 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:10:42,922 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:10:42,922 INFO [Listener at localhost/46655] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 06:10:42,924 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,925 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@61c85c36{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/jetty-0_0_0_0-36189-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7389573560646950172/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:10:42,927 INFO [Listener at localhost/46655] server.AbstractConnector(333): Started ServerConnector@269a05ea{HTTP/1.1, (http/1.1)}{0.0.0.0:36189} 2023-07-24 06:10:42,927 INFO [Listener at localhost/46655] server.Server(415): Started @8528ms 2023-07-24 06:10:42,943 INFO [Listener at localhost/46655] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:10:42,943 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,944 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,944 INFO [Listener at localhost/46655] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:10:42,944 INFO 
[Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:42,944 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:10:42,944 INFO [Listener at localhost/46655] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:10:42,947 INFO [Listener at localhost/46655] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37173 2023-07-24 06:10:42,948 INFO [Listener at localhost/46655] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:10:42,951 DEBUG [Listener at localhost/46655] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:10:42,952 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:42,954 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:42,956 INFO [Listener at localhost/46655] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37173 connecting to ZooKeeper ensemble=127.0.0.1:54990 2023-07-24 06:10:42,960 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:371730x0, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:10:42,962 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37173-0x10195f3f3a20003 connected 2023-07-24 06:10:42,962 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:10:42,963 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:10:42,964 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:10:42,965 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37173 2023-07-24 06:10:42,965 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37173 2023-07-24 06:10:42,965 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37173 2023-07-24 06:10:42,969 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37173 2023-07-24 06:10:42,969 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37173 2023-07-24 06:10:42,972 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:10:42,972 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:10:42,972 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:10:42,973 INFO [Listener at localhost/46655] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:10:42,973 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:10:42,973 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:10:42,973 INFO [Listener at localhost/46655] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 06:10:42,974 INFO [Listener at localhost/46655] http.HttpServer(1146): Jetty bound to port 36239 2023-07-24 06:10:42,974 INFO [Listener at localhost/46655] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:10:42,976 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,977 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4a081fb2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:10:42,977 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:42,977 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1946f983{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:10:43,108 INFO [Listener at localhost/46655] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:10:43,109 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:10:43,110 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:10:43,110 INFO [Listener at localhost/46655] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 06:10:43,112 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:43,113 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@445c6e68{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/jetty-0_0_0_0-36239-hbase-server-2_4_18-SNAPSHOT_jar-_-any-234060254754539244/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:10:43,115 INFO [Listener at localhost/46655] server.AbstractConnector(333): Started ServerConnector@49040dda{HTTP/1.1, (http/1.1)}{0.0.0.0:36239} 2023-07-24 06:10:43,115 INFO [Listener at localhost/46655] server.Server(415): Started @8716ms 2023-07-24 06:10:43,124 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:10:43,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@70f81b2{HTTP/1.1, (http/1.1)}{0.0.0.0:41201} 2023-07-24 06:10:43,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8730ms 2023-07-24 06:10:43,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:10:43,141 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 06:10:43,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:10:43,165 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:10:43,165 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:10:43,165 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:10:43,165 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:43,165 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:10:43,168 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 06:10:43,170 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39303,1690179040397 from backup master directory 2023-07-24 06:10:43,170 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 06:10:43,176 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:10:43,177 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 06:10:43,177 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 06:10:43,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:10:43,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-24 06:10:43,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-24 06:10:43,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/hbase.id with ID: 507c294c-2fca-4595-bc60-3a33e5060e50 2023-07-24 06:10:43,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:43,367 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:43,423 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x680e8523 to 127.0.0.1:54990 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:10:43,449 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70255042, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:10:43,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:10:43,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 06:10:43,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-24 06:10:43,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-24 06:10:43,507 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 06:10:43,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 06:10:43,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:10:43,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store-tmp 2023-07-24 06:10:43,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:43,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 06:10:43,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:10:43,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:10:43,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 06:10:43,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:10:43,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 06:10:43,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 06:10:43,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/WALs/jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:10:43,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39303%2C1690179040397, suffix=, logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/WALs/jenkins-hbase4.apache.org,39303,1690179040397, archiveDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/oldWALs, maxLogs=10 2023-07-24 06:10:43,712 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK] 2023-07-24 06:10:43,712 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK] 2023-07-24 06:10:43,712 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK] 2023-07-24 06:10:43,725 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 06:10:43,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/WALs/jenkins-hbase4.apache.org,39303,1690179040397/jenkins-hbase4.apache.org%2C39303%2C1690179040397.1690179043653 2023-07-24 06:10:43,809 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK], DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK], DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK]] 2023-07-24 06:10:43,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:43,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:43,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:10:43,818 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:10:43,904 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:10:43,915 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 06:10:43,962 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 06:10:43,983 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-24 06:10:43,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:10:43,993 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:10:44,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:10:44,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:44,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9726188800, jitterRate=-0.09417808055877686}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:44,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 06:10:44,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 06:10:44,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 06:10:44,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 06:10:44,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 06:10:44,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-24 06:10:44,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 52 msec 2023-07-24 06:10:44,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 06:10:44,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 06:10:44,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-24 06:10:44,165 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 06:10:44,172 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 06:10:44,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 06:10:44,183 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:44,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 06:10:44,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 06:10:44,201 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 06:10:44,207 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:10:44,207 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:10:44,207 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:10:44,207 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:10:44,207 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:44,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39303,1690179040397, sessionid=0x10195f3f3a20000, setting cluster-up flag (Was=false) 2023-07-24 06:10:44,228 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:44,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 06:10:44,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:10:44,242 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:44,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 06:10:44,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:10:44,253 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.hbase-snapshot/.tmp 2023-07-24 06:10:44,321 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(951): ClusterId : 507c294c-2fca-4595-bc60-3a33e5060e50 2023-07-24 06:10:44,321 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(951): ClusterId : 507c294c-2fca-4595-bc60-3a33e5060e50 2023-07-24 06:10:44,321 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(951): ClusterId : 507c294c-2fca-4595-bc60-3a33e5060e50 2023-07-24 06:10:44,330 DEBUG [RS:2;jenkins-hbase4:37173] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:10:44,330 DEBUG [RS:1;jenkins-hbase4:40449] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:10:44,330 DEBUG [RS:0;jenkins-hbase4:38203] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:10:44,344 DEBUG [RS:1;jenkins-hbase4:40449] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:10:44,344 DEBUG [RS:0;jenkins-hbase4:38203] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:10:44,344 DEBUG [RS:2;jenkins-hbase4:37173] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:10:44,344 DEBUG [RS:0;jenkins-hbase4:38203] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:10:44,344 DEBUG [RS:1;jenkins-hbase4:40449] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:10:44,344 DEBUG [RS:2;jenkins-hbase4:37173] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:10:44,348 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 06:10:44,350 DEBUG [RS:2;jenkins-hbase4:37173] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:10:44,350 DEBUG [RS:0;jenkins-hbase4:38203] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:10:44,350 DEBUG [RS:1;jenkins-hbase4:40449] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:10:44,353 DEBUG 
[RS:2;jenkins-hbase4:37173] zookeeper.ReadOnlyZKClient(139): Connect 0x62d1bdd2 to 127.0.0.1:54990 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:10:44,353 DEBUG [RS:1;jenkins-hbase4:40449] zookeeper.ReadOnlyZKClient(139): Connect 0x0bee5f2a to 127.0.0.1:54990 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:10:44,353 DEBUG [RS:0;jenkins-hbase4:38203] zookeeper.ReadOnlyZKClient(139): Connect 0x014ded73 to 127.0.0.1:54990 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:10:44,364 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 06:10:44,367 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 06:10:44,367 DEBUG [RS:2;jenkins-hbase4:37173] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22dfb89f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:10:44,368 DEBUG [RS:0;jenkins-hbase4:38203] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2587cac2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:10:44,369 DEBUG [RS:1;jenkins-hbase4:40449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f87869c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:10:44,369 DEBUG [RS:2;jenkins-hbase4:37173] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21d9bf14, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:10:44,369 DEBUG [RS:0;jenkins-hbase4:38203] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@9a228b4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:10:44,369 DEBUG [RS:1;jenkins-hbase4:40449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42c246dd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:10:44,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 06:10:44,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-24 06:10:44,403 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38203 2023-07-24 06:10:44,403 DEBUG [RS:2;jenkins-hbase4:37173] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:37173 2023-07-24 06:10:44,404 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40449 2023-07-24 06:10:44,410 INFO [RS:2;jenkins-hbase4:37173] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:10:44,410 INFO [RS:0;jenkins-hbase4:38203] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:10:44,412 INFO [RS:0;jenkins-hbase4:38203] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:10:44,410 INFO [RS:1;jenkins-hbase4:40449] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:10:44,412 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:10:44,411 INFO [RS:2;jenkins-hbase4:37173] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:10:44,412 INFO [RS:1;jenkins-hbase4:40449] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:10:44,412 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:10:44,412 DEBUG [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:10:44,416 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39303,1690179040397 with isa=jenkins-hbase4.apache.org/172.31.14.131:37173, startcode=1690179042942 2023-07-24 06:10:44,416 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39303,1690179040397 with isa=jenkins-hbase4.apache.org/172.31.14.131:38203, startcode=1690179042473 2023-07-24 06:10:44,416 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39303,1690179040397 with isa=jenkins-hbase4.apache.org/172.31.14.131:40449, startcode=1690179042726 2023-07-24 06:10:44,443 DEBUG [RS:0;jenkins-hbase4:38203] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:10:44,444 DEBUG [RS:1;jenkins-hbase4:40449] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:10:44,443 DEBUG [RS:2;jenkins-hbase4:37173] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:10:44,514 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 06:10:44,523 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44507, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:10:44,523 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52569, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-24 06:10:44,523 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33605, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:10:44,534 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:44,547 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:44,549 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:44,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 06:10:44,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 
0.0 etc. 2023-07-24 06:10:44,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 06:10:44,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 06:10:44,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:10:44,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:10:44,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:10:44,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:10:44,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 06:10:44,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:10:44,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690179074566 2023-07-24 06:10:44,569 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 06:10:44,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 06:10:44,573 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 06:10:44,575 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 06:10:44,576 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 06:10:44,576 DEBUG [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(2830): Master 
is not running yet 2023-07-24 06:10:44,576 WARN [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 06:10:44,576 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 06:10:44,576 WARN [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 06:10:44,576 WARN [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 06:10:44,577 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 06:10:44,582 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 06:10:44,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 06:10:44,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 06:10:44,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 06:10:44,585 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-24 06:10:44,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 06:10:44,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 06:10:44,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 06:10:44,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 06:10:44,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 06:10:44,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179044597,5,FailOnTimeoutGroup] 2023-07-24 06:10:44,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179044597,5,FailOnTimeoutGroup] 2023-07-24 06:10:44,598 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,598 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 06:10:44,600 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,600 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-24 06:10:44,639 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:44,640 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:44,640 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50 2023-07-24 06:10:44,666 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:44,669 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 06:10:44,673 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/info 2023-07-24 06:10:44,673 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 06:10:44,674 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:44,675 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 06:10:44,677 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39303,1690179040397 with isa=jenkins-hbase4.apache.org/172.31.14.131:40449, startcode=1690179042726 2023-07-24 06:10:44,678 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39303,1690179040397 with isa=jenkins-hbase4.apache.org/172.31.14.131:37173, startcode=1690179042942 2023-07-24 06:10:44,678 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39303,1690179040397 with isa=jenkins-hbase4.apache.org/172.31.14.131:38203, startcode=1690179042473 2023-07-24 06:10:44,678 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:10:44,679 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 06:10:44,681 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:44,681 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 06:10:44,684 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39303] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:44,686 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 06:10:44,687 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 06:10:44,688 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/table 2023-07-24 06:10:44,689 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 06:10:44,690 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:44,692 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740 2023-07-24 06:10:44,694 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50 2023-07-24 06:10:44,694 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41501 2023-07-24 06:10:44,694 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33633 2023-07-24 06:10:44,692 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39303] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:44,695 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 06:10:44,695 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 06:10:44,695 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740 2023-07-24 06:10:44,696 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39303] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:44,697 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 06:10:44,697 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 06:10:44,699 DEBUG [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50 2023-07-24 06:10:44,699 DEBUG [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41501 2023-07-24 06:10:44,699 DEBUG [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33633 2023-07-24 06:10:44,700 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50 2023-07-24 06:10:44,700 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41501 2023-07-24 06:10:44,700 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33633 2023-07-24 06:10:44,710 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:10:44,711 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 06:10:44,714 DEBUG [RS:1;jenkins-hbase4:40449] zookeeper.ZKUtil(162): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:44,714 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 06:10:44,714 WARN [RS:1;jenkins-hbase4:40449] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 06:10:44,714 INFO [RS:1;jenkins-hbase4:40449] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:10:44,715 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:44,715 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38203,1690179042473] 2023-07-24 06:10:44,714 DEBUG [RS:2;jenkins-hbase4:37173] zookeeper.ZKUtil(162): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:44,715 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37173,1690179042942] 2023-07-24 06:10:44,715 DEBUG [RS:0;jenkins-hbase4:38203] zookeeper.ZKUtil(162): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:44,716 WARN [RS:0;jenkins-hbase4:38203] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 06:10:44,716 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40449,1690179042726] 2023-07-24 06:10:44,715 WARN [RS:2;jenkins-hbase4:37173] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 06:10:44,716 INFO [RS:0;jenkins-hbase4:38203] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:10:44,717 INFO [RS:2;jenkins-hbase4:37173] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:10:44,718 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:44,718 DEBUG [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:44,728 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:44,731 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11011803840, jitterRate=0.025554150342941284}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 06:10:44,732 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 06:10:44,732 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 06:10:44,732 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 06:10:44,732 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 06:10:44,732 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 06:10:44,732 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 06:10:44,734 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 06:10:44,734 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 06:10:44,737 DEBUG [RS:1;jenkins-hbase4:40449] zookeeper.ZKUtil(162): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:44,737 DEBUG [RS:0;jenkins-hbase4:38203] zookeeper.ZKUtil(162): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:44,737 DEBUG [RS:2;jenkins-hbase4:37173] zookeeper.ZKUtil(162): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:44,738 DEBUG [RS:1;jenkins-hbase4:40449] zookeeper.ZKUtil(162): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:44,739 DEBUG [RS:0;jenkins-hbase4:38203] zookeeper.ZKUtil(162): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:44,739 DEBUG [RS:2;jenkins-hbase4:37173] zookeeper.ZKUtil(162): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:44,739 DEBUG [RS:1;jenkins-hbase4:40449] zookeeper.ZKUtil(162): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:44,740 DEBUG [RS:0;jenkins-hbase4:38203] zookeeper.ZKUtil(162): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:44,740 DEBUG [RS:2;jenkins-hbase4:37173] zookeeper.ZKUtil(162): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:44,742 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 06:10:44,742 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 06:10:44,753 DEBUG [RS:2;jenkins-hbase4:37173] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:10:44,753 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:10:44,753 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:10:44,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 06:10:44,765 INFO [RS:2;jenkins-hbase4:37173] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:10:44,765 INFO [RS:0;jenkins-hbase4:38203] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:10:44,765 INFO [RS:1;jenkins-hbase4:40449] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:10:44,768 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 06:10:44,772 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 06:10:44,791 INFO [RS:0;jenkins-hbase4:38203] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:10:44,791 INFO [RS:2;jenkins-hbase4:37173] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:10:44,791 INFO [RS:1;jenkins-hbase4:40449] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 
06:10:44,796 INFO [RS:1;jenkins-hbase4:40449] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:10:44,796 INFO [RS:0;jenkins-hbase4:38203] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:10:44,796 INFO [RS:1;jenkins-hbase4:40449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,796 INFO [RS:2;jenkins-hbase4:37173] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:10:44,797 INFO [RS:0;jenkins-hbase4:38203] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,797 INFO [RS:2;jenkins-hbase4:37173] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,797 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:10:44,797 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:10:44,798 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:10:44,806 INFO [RS:2;jenkins-hbase4:37173] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,806 INFO [RS:1;jenkins-hbase4:40449] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,806 INFO [RS:0;jenkins-hbase4:38203] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 06:10:44,806 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,807 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:10:44,807 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:10:44,807 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:2;jenkins-hbase4:37173] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:10:44,808 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,808 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,809 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,809 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,809 DEBUG [RS:0;jenkins-hbase4:38203] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,809 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,809 INFO [RS:2;jenkins-hbase4:37173] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,809 DEBUG [RS:1;jenkins-hbase4:40449] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:44,809 INFO [RS:2;jenkins-hbase4:37173] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,809 INFO [RS:2;jenkins-hbase4:37173] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,811 INFO [RS:0;jenkins-hbase4:38203] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,811 INFO [RS:0;jenkins-hbase4:38203] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,811 INFO [RS:0;jenkins-hbase4:38203] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-24 06:10:44,812 INFO [RS:1;jenkins-hbase4:40449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,812 INFO [RS:1;jenkins-hbase4:40449] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,812 INFO [RS:1;jenkins-hbase4:40449] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,829 INFO [RS:2;jenkins-hbase4:37173] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:10:44,830 INFO [RS:1;jenkins-hbase4:40449] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:10:44,831 INFO [RS:0;jenkins-hbase4:38203] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:10:44,833 INFO [RS:0;jenkins-hbase4:38203] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38203,1690179042473-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,833 INFO [RS:1;jenkins-hbase4:40449] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40449,1690179042726-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,833 INFO [RS:2;jenkins-hbase4:37173] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37173,1690179042942-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:44,853 INFO [RS:2;jenkins-hbase4:37173] regionserver.Replication(203): jenkins-hbase4.apache.org,37173,1690179042942 started 2023-07-24 06:10:44,853 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37173,1690179042942, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37173, sessionid=0x10195f3f3a20003 2023-07-24 06:10:44,853 DEBUG [RS:2;jenkins-hbase4:37173] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:10:44,853 DEBUG [RS:2;jenkins-hbase4:37173] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:44,853 DEBUG [RS:2;jenkins-hbase4:37173] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37173,1690179042942' 2023-07-24 06:10:44,853 DEBUG [RS:2;jenkins-hbase4:37173] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:10:44,854 INFO [RS:0;jenkins-hbase4:38203] regionserver.Replication(203): jenkins-hbase4.apache.org,38203,1690179042473 started 2023-07-24 06:10:44,854 INFO [RS:1;jenkins-hbase4:40449] regionserver.Replication(203): jenkins-hbase4.apache.org,40449,1690179042726 started 2023-07-24 06:10:44,854 DEBUG [RS:2;jenkins-hbase4:37173] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:10:44,854 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40449,1690179042726, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40449, sessionid=0x10195f3f3a20002 2023-07-24 06:10:44,854 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38203,1690179042473, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38203, sessionid=0x10195f3f3a20001 2023-07-24 06:10:44,855 DEBUG 
[RS:1;jenkins-hbase4:40449] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:10:44,855 DEBUG [RS:0;jenkins-hbase4:38203] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:10:44,855 DEBUG [RS:0;jenkins-hbase4:38203] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:44,855 DEBUG [RS:1;jenkins-hbase4:40449] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:44,856 DEBUG [RS:0;jenkins-hbase4:38203] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38203,1690179042473' 2023-07-24 06:10:44,856 DEBUG [RS:0;jenkins-hbase4:38203] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:10:44,856 DEBUG [RS:1;jenkins-hbase4:40449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40449,1690179042726' 2023-07-24 06:10:44,856 DEBUG [RS:2;jenkins-hbase4:37173] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:10:44,856 DEBUG [RS:1;jenkins-hbase4:40449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:10:44,856 DEBUG [RS:2;jenkins-hbase4:37173] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:10:44,857 DEBUG [RS:2;jenkins-hbase4:37173] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:44,857 DEBUG [RS:2;jenkins-hbase4:37173] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37173,1690179042942' 2023-07-24 06:10:44,857 DEBUG [RS:0;jenkins-hbase4:38203] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:10:44,857 DEBUG [RS:2;jenkins-hbase4:37173] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:10:44,857 DEBUG [RS:1;jenkins-hbase4:40449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:10:44,857 DEBUG [RS:0;jenkins-hbase4:38203] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:10:44,857 DEBUG [RS:0;jenkins-hbase4:38203] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:10:44,857 DEBUG [RS:2;jenkins-hbase4:37173] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:10:44,857 DEBUG [RS:0;jenkins-hbase4:38203] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:44,859 DEBUG [RS:0;jenkins-hbase4:38203] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38203,1690179042473' 2023-07-24 06:10:44,859 DEBUG [RS:0;jenkins-hbase4:38203] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:10:44,859 DEBUG [RS:1;jenkins-hbase4:40449] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc 
started 2023-07-24 06:10:44,859 DEBUG [RS:1;jenkins-hbase4:40449] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:10:44,859 DEBUG [RS:1;jenkins-hbase4:40449] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:44,859 DEBUG [RS:1;jenkins-hbase4:40449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40449,1690179042726' 2023-07-24 06:10:44,860 DEBUG [RS:1;jenkins-hbase4:40449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:10:44,860 DEBUG [RS:0;jenkins-hbase4:38203] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:10:44,860 DEBUG [RS:1;jenkins-hbase4:40449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:10:44,860 DEBUG [RS:2;jenkins-hbase4:37173] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:10:44,860 INFO [RS:2;jenkins-hbase4:37173] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 06:10:44,860 INFO [RS:2;jenkins-hbase4:37173] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 06:10:44,860 DEBUG [RS:1;jenkins-hbase4:40449] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:10:44,860 DEBUG [RS:0;jenkins-hbase4:38203] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:10:44,861 INFO [RS:1;jenkins-hbase4:40449] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 06:10:44,861 INFO [RS:1;jenkins-hbase4:40449] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 06:10:44,861 INFO [RS:0;jenkins-hbase4:38203] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 06:10:44,861 INFO [RS:0;jenkins-hbase4:38203] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 06:10:44,923 DEBUG [jenkins-hbase4:39303] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 06:10:44,938 DEBUG [jenkins-hbase4:39303] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:44,940 DEBUG [jenkins-hbase4:39303] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:44,940 DEBUG [jenkins-hbase4:39303] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:44,940 DEBUG [jenkins-hbase4:39303] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:44,940 DEBUG [jenkins-hbase4:39303] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:44,944 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40449,1690179042726, state=OPENING 2023-07-24 06:10:44,952 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 06:10:44,954 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:44,955 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 06:10:44,958 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:44,975 INFO [RS:1;jenkins-hbase4:40449] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40449%2C1690179042726, suffix=, logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,40449,1690179042726, archiveDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs, maxLogs=32 2023-07-24 06:10:44,976 INFO [RS:2;jenkins-hbase4:37173] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37173%2C1690179042942, suffix=, logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,37173,1690179042942, archiveDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs, maxLogs=32 2023-07-24 06:10:44,976 INFO [RS:0;jenkins-hbase4:38203] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38203%2C1690179042473, suffix=, logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,38203,1690179042473, archiveDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs, maxLogs=32 2023-07-24 06:10:45,014 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK] 2023-07-24 06:10:45,018 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK] 2023-07-24 06:10:45,039 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK] 2023-07-24 06:10:45,040 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK] 2023-07-24 06:10:45,040 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK] 2023-07-24 06:10:45,040 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK] 2023-07-24 06:10:45,044 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK] 2023-07-24 06:10:45,044 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK] 2023-07-24 06:10:45,044 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK] 2023-07-24 06:10:45,064 INFO [RS:2;jenkins-hbase4:37173] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,37173,1690179042942/jenkins-hbase4.apache.org%2C37173%2C1690179042942.1690179044984 2023-07-24 06:10:45,065 INFO [RS:1;jenkins-hbase4:40449] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,40449,1690179042726/jenkins-hbase4.apache.org%2C40449%2C1690179042726.1690179044984 2023-07-24 06:10:45,065 INFO [RS:0;jenkins-hbase4:38203] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,38203,1690179042473/jenkins-hbase4.apache.org%2C38203%2C1690179042473.1690179044984 2023-07-24 06:10:45,066 DEBUG [RS:2;jenkins-hbase4:37173] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK], DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK], DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK]] 2023-07-24 06:10:45,066 DEBUG [RS:1;jenkins-hbase4:40449] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK], DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK], DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK]] 2023-07-24 06:10:45,067 DEBUG [RS:0;jenkins-hbase4:38203] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK], DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK], DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK]] 2023-07-24 06:10:45,145 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:45,147 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:10:45,151 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34642, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:10:45,162 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 06:10:45,163 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:10:45,166 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40449%2C1690179042726.meta, suffix=.meta, logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,40449,1690179042726, archiveDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs, maxLogs=32 2023-07-24 06:10:45,169 WARN [ReadOnlyZKClient-127.0.0.1:54990@0x680e8523] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 06:10:45,187 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK] 2023-07-24 06:10:45,188 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK] 2023-07-24 06:10:45,189 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK] 2023-07-24 06:10:45,201 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:10:45,202 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,40449,1690179042726/jenkins-hbase4.apache.org%2C40449%2C1690179042726.meta.1690179045167.meta 
2023-07-24 06:10:45,203 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK], DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK], DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK]] 2023-07-24 06:10:45,203 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:45,205 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34644, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:10:45,205 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 06:10:45,206 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40449] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34644 deadline: 1690179105205, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:45,208 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 06:10:45,211 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 06:10:45,216 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 06:10:45,216 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:45,217 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 06:10:45,217 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 06:10:45,220 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 06:10:45,222 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/info 2023-07-24 06:10:45,222 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/info 2023-07-24 06:10:45,223 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 06:10:45,224 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:45,224 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 06:10:45,226 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:10:45,226 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:10:45,230 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 06:10:45,233 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:45,234 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 06:10:45,235 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/table 2023-07-24 06:10:45,235 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/table 2023-07-24 06:10:45,236 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 06:10:45,237 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:45,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740 2023-07-24 06:10:45,242 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740 2023-07-24 06:10:45,245 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 06:10:45,248 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 06:10:45,250 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10222216160, jitterRate=-0.04798193275928497}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 06:10:45,250 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 06:10:45,263 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690179045136 2023-07-24 06:10:45,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 06:10:45,283 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 06:10:45,283 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40449,1690179042726, state=OPEN 2023-07-24 06:10:45,286 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 06:10:45,286 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 06:10:45,290 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 06:10:45,291 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40449,1690179042726 in 328 msec 2023-07-24 06:10:45,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 06:10:45,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 538 msec 2023-07-24 06:10:45,301 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 920 msec 2023-07-24 06:10:45,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690179045301, completionTime=-1 2023-07-24 06:10:45,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 06:10:45,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 06:10:45,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 06:10:45,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690179105358 2023-07-24 06:10:45,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690179165358 2023-07-24 06:10:45,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 56 msec 2023-07-24 06:10:45,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39303,1690179040397-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:45,381 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39303,1690179040397-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:45,381 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39303,1690179040397-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:45,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39303, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:45,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:45,396 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 06:10:45,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 06:10:45,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 06:10:45,423 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 06:10:45,428 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:10:45,432 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:10:45,458 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,461 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35 empty. 2023-07-24 06:10:45,462 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,462 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 06:10:45,510 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:45,512 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 383d19758bb15afdbebec46f9d69da35, NAME => 'hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:45,527 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:45,527 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 383d19758bb15afdbebec46f9d69da35, disabling compactions & flushes 2023-07-24 06:10:45,527 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 
2023-07-24 06:10:45,527 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 2023-07-24 06:10:45,527 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. after waiting 0 ms 2023-07-24 06:10:45,527 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 2023-07-24 06:10:45,527 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 2023-07-24 06:10:45,528 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 383d19758bb15afdbebec46f9d69da35: 2023-07-24 06:10:45,532 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:10:45,549 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179045534"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179045534"}]},"ts":"1690179045534"} 2023-07-24 06:10:45,580 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:10:45,583 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:10:45,589 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179045583"}]},"ts":"1690179045583"} 2023-07-24 06:10:45,598 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 06:10:45,603 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:45,604 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:45,604 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:45,604 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:45,604 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:45,607 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=383d19758bb15afdbebec46f9d69da35, ASSIGN}] 2023-07-24 06:10:45,611 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=383d19758bb15afdbebec46f9d69da35, ASSIGN 2023-07-24 06:10:45,615 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=383d19758bb15afdbebec46f9d69da35, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:10:45,731 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:10:45,734 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 06:10:45,738 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:10:45,742 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:10:45,746 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:45,747 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8 empty. 2023-07-24 06:10:45,749 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:45,749 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 06:10:45,766 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
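The create 'hbase:rsgroup' entry at the start of this block additionally carries a coprocessor (MultiRowMutationEndpoint) and a SPLIT_POLICY of DisabledRegionSplitPolicy. A hedged sketch of declaring the same attributes through the public builder API; the real descriptor is assembled by the RSGroupInfoManagerImpl startup worker shown in the thread name above, so this is only an approximation:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupTableDescriptorSketch {
  static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "rsgroup"))
        // coprocessor$1 in the logged descriptor
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // SPLIT_POLICY metadata in the logged descriptor
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1)                   // VERSIONS => '1'
            .build())
        .build();
  }
}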
2023-07-24 06:10:45,788 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=383d19758bb15afdbebec46f9d69da35, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:45,789 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179045788"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179045788"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179045788"}]},"ts":"1690179045788"} 2023-07-24 06:10:45,799 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 383d19758bb15afdbebec46f9d69da35, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:45,804 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:45,806 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0aba53baeae40b1c65e437bbd16090b8, NAME => 'hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:45,845 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:45,845 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 0aba53baeae40b1c65e437bbd16090b8, disabling compactions & flushes 2023-07-24 06:10:45,845 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:45,845 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:45,845 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. after waiting 0 ms 2023-07-24 06:10:45,845 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:45,845 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 
2023-07-24 06:10:45,845 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 0aba53baeae40b1c65e437bbd16090b8: 2023-07-24 06:10:45,859 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:10:45,861 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179045861"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179045861"}]},"ts":"1690179045861"} 2023-07-24 06:10:45,872 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:10:45,875 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:10:45,875 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179045875"}]},"ts":"1690179045875"} 2023-07-24 06:10:45,878 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 06:10:45,884 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:45,884 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:45,884 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:45,884 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:45,884 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:45,885 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0aba53baeae40b1c65e437bbd16090b8, ASSIGN}] 2023-07-24 06:10:45,888 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0aba53baeae40b1c65e437bbd16090b8, ASSIGN 2023-07-24 06:10:45,890 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0aba53baeae40b1c65e437bbd16090b8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:10:45,967 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 
2023-07-24 06:10:45,967 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 383d19758bb15afdbebec46f9d69da35, NAME => 'hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:45,968 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,968 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:45,968 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,968 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,973 INFO [StoreOpener-383d19758bb15afdbebec46f9d69da35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,976 DEBUG [StoreOpener-383d19758bb15afdbebec46f9d69da35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35/info 2023-07-24 06:10:45,976 DEBUG [StoreOpener-383d19758bb15afdbebec46f9d69da35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35/info 2023-07-24 06:10:45,977 INFO [StoreOpener-383d19758bb15afdbebec46f9d69da35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 383d19758bb15afdbebec46f9d69da35 columnFamilyName info 2023-07-24 06:10:45,977 INFO [StoreOpener-383d19758bb15afdbebec46f9d69da35-1] regionserver.HStore(310): Store=383d19758bb15afdbebec46f9d69da35/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:45,979 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,980 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,985 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 383d19758bb15afdbebec46f9d69da35 2023-07-24 06:10:45,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:45,990 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 383d19758bb15afdbebec46f9d69da35; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10440818240, jitterRate=-0.027623027563095093}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:45,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 383d19758bb15afdbebec46f9d69da35: 2023-07-24 06:10:45,992 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35., pid=7, masterSystemTime=1690179045957 2023-07-24 06:10:45,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 2023-07-24 06:10:45,996 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 
2023-07-24 06:10:45,997 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=383d19758bb15afdbebec46f9d69da35, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:45,997 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179045996"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179045996"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179045996"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179045996"}]},"ts":"1690179045996"} 2023-07-24 06:10:46,004 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 06:10:46,004 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 383d19758bb15afdbebec46f9d69da35, server=jenkins-hbase4.apache.org,40449,1690179042726 in 201 msec 2023-07-24 06:10:46,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 06:10:46,009 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=383d19758bb15afdbebec46f9d69da35, ASSIGN in 398 msec 2023-07-24 06:10:46,010 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:10:46,010 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179046010"}]},"ts":"1690179046010"} 2023-07-24 06:10:46,012 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 06:10:46,016 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:10:46,019 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 605 msec 2023-07-24 06:10:46,026 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 06:10:46,027 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:10:46,027 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:46,040 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
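By this point hbase:namespace has been created, assigned, opened on jenkins-hbase4.apache.org,40449 and marked ENABLED in hbase:meta. A minimal sketch of observing that end state from a client with the Admin API (connection settings assumed; nothing below is taken from this run):

import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class TableStateSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName ns = TableName.valueOf("hbase", "namespace");
      System.out.println("enabled: " + admin.isTableEnabled(ns));  // state=ENABLED above
      List<RegionInfo> regions = admin.getRegions(ns);             // single region, empty start/end key
      regions.forEach(r -> System.out.println(r.getRegionNameAsString()));
    }
  }
}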
2023-07-24 06:10:46,042 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=0aba53baeae40b1c65e437bbd16090b8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:46,042 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179046041"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179046041"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179046041"}]},"ts":"1690179046041"} 2023-07-24 06:10:46,050 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 0aba53baeae40b1c65e437bbd16090b8, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:10:46,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 06:10:46,082 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:10:46,088 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 31 msec 2023-07-24 06:10:46,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 06:10:46,103 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-24 06:10:46,103 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 06:10:46,207 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:46,207 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:10:46,211 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56372, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:10:46,218 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:46,218 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0aba53baeae40b1c65e437bbd16090b8, NAME => 'hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:46,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 06:10:46,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 
service=MultiRowMutationService 2023-07-24 06:10:46,220 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 06:10:46,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:46,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:46,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:46,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:46,223 INFO [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:46,225 DEBUG [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m 2023-07-24 06:10:46,225 DEBUG [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m 2023-07-24 06:10:46,226 INFO [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0aba53baeae40b1c65e437bbd16090b8 columnFamilyName m 2023-07-24 06:10:46,227 INFO [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] regionserver.HStore(310): Store=0aba53baeae40b1c65e437bbd16090b8/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:46,229 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:46,232 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 
0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:46,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:46,243 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:46,244 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0aba53baeae40b1c65e437bbd16090b8; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2c8c5303, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:46,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0aba53baeae40b1c65e437bbd16090b8: 2023-07-24 06:10:46,246 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8., pid=9, masterSystemTime=1690179046207 2023-07-24 06:10:46,250 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:46,251 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:46,251 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=0aba53baeae40b1c65e437bbd16090b8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:46,252 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179046251"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179046251"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179046251"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179046251"}]},"ts":"1690179046251"} 2023-07-24 06:10:46,260 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-24 06:10:46,260 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 0aba53baeae40b1c65e437bbd16090b8, server=jenkins-hbase4.apache.org,38203,1690179042473 in 206 msec 2023-07-24 06:10:46,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-24 06:10:46,265 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0aba53baeae40b1c65e437bbd16090b8, ASSIGN in 375 msec 2023-07-24 06:10:46,279 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:10:46,291 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 189 msec 2023-07-24 06:10:46,294 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:10:46,295 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179046294"}]},"ts":"1690179046294"} 2023-07-24 06:10:46,298 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 06:10:46,303 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 06:10:46,305 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:10:46,308 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 06:10:46,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.130sec 2023-07-24 06:10:46,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 06:10:46,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 06:10:46,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 06:10:46,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39303,1690179040397-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 06:10:46,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39303,1690179040397-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
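The finished CreateNamespaceProcedure entries for 'default' (earlier) and 'hbase' (here) cover the two built-in namespaces. A user namespace goes through the same procedure when created via the Admin API; a small sketch with an invented namespace name and an assumed open Admin handle:

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public class NamespaceSketch {
  static void createTestNamespace(Admin admin) throws IOException {
    // Drives a CreateNamespaceProcedure on the master, like pid=10/pid=11 above.
    admin.createNamespace(NamespaceDescriptor.create("test_ns").build());
    for (NamespaceDescriptor d : admin.listNamespaceDescriptors()) {
      System.out.println(d.getName());   // expect: default, hbase, test_ns
    }
  }
}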
2023-07-24 06:10:46,323 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 574 msec 2023-07-24 06:10:46,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 06:10:46,328 DEBUG [Listener at localhost/46655] zookeeper.ReadOnlyZKClient(139): Connect 0x42116de3 to 127.0.0.1:54990 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:10:46,343 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:10:46,344 DEBUG [Listener at localhost/46655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d5eab83, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:10:46,352 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:10:46,359 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 06:10:46,359 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
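With hbase:rsgroup online and the GroupBasedLoadBalancer reporting ready, group metadata can be read through the rsgroup admin endpoint. A sketch using RSGroupAdminClient from the hbase-rsgroup module, assuming an open Connection to this cluster; it corresponds to the ListRSGroupInfos master service request logged shortly below:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupListSketch {
  static void listGroups(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
      System.out.println(info.getName() + " servers=" + info.getServers());
    }
  }
}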
2023-07-24 06:10:46,372 DEBUG [hconnection-0x2231fec8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:10:46,392 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34658, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:10:46,403 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:10:46,404 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:46,420 DEBUG [Listener at localhost/46655] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 06:10:46,424 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53912, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 06:10:46,439 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 06:10:46,439 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:46,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 06:10:46,446 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:10:46,447 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:46,447 DEBUG [Listener at localhost/46655] zookeeper.ReadOnlyZKClient(139): Connect 0x1792a4f4 to 127.0.0.1:54990 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:10:46,450 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 06:10:46,458 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 06:10:46,458 DEBUG [Listener at localhost/46655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22934466, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:10:46,459 INFO [Listener at localhost/46655] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54990 2023-07-24 06:10:46,466 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, 
quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:10:46,467 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10195f3f3a2000a connected 2023-07-24 06:10:46,512 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=677, MaxFileDescriptor=60000, SystemLoadAverage=375, ProcessCount=177, AvailableMemoryMB=7093 2023-07-24 06:10:46,515 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-24 06:10:46,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:46,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:46,607 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 06:10:46,627 INFO [Listener at localhost/46655] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:10:46,627 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:46,628 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:46,628 INFO [Listener at localhost/46655] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:10:46,628 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:10:46,628 INFO [Listener at localhost/46655] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:10:46,628 INFO [Listener at localhost/46655] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:10:46,634 INFO [Listener at localhost/46655] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34793 2023-07-24 06:10:46,635 INFO [Listener at localhost/46655] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:10:46,636 DEBUG [Listener at localhost/46655] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:10:46,638 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:46,645 INFO [Listener at localhost/46655] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:10:46,650 INFO 
[Listener at localhost/46655] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34793 connecting to ZooKeeper ensemble=127.0.0.1:54990 2023-07-24 06:10:46,660 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:347930x0, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:10:46,667 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34793-0x10195f3f3a2000b connected 2023-07-24 06:10:46,670 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(162): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 06:10:46,671 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(162): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 06:10:46,672 DEBUG [Listener at localhost/46655] zookeeper.ZKUtil(164): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:10:46,674 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34793 2023-07-24 06:10:46,675 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34793 2023-07-24 06:10:46,676 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34793 2023-07-24 06:10:46,682 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34793 2023-07-24 06:10:46,683 DEBUG [Listener at localhost/46655] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34793 2023-07-24 06:10:46,686 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:10:46,686 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:10:46,686 INFO [Listener at localhost/46655] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:10:46,687 INFO [Listener at localhost/46655] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:10:46,687 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:10:46,687 INFO [Listener at localhost/46655] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:10:46,687 INFO [Listener at localhost/46655] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
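The RpcExecutor lines above (default.FPBQ.Fifo, priority.RWQ.Fifo, replication.FPBQ.Fifo, metaPriority.FPBQ.Fifo) size their handler pools from configuration. As an assumption about which knobs produce the small handler counts seen here, not something read from this log, the commonly used keys look like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcHandlerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.regionserver.handler.count", 3);      // default call queue handlers
    conf.setInt("hbase.regionserver.metahandler.count", 3);  // priority call queue handlers
    System.out.println(conf.getInt("hbase.regionserver.handler.count", 30));
  }
}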
2023-07-24 06:10:46,688 INFO [Listener at localhost/46655] http.HttpServer(1146): Jetty bound to port 36883 2023-07-24 06:10:46,688 INFO [Listener at localhost/46655] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:10:46,696 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:46,696 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14ac5f55{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:10:46,697 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:46,697 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34f7812e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:10:46,837 INFO [Listener at localhost/46655] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:10:46,839 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:10:46,840 INFO [Listener at localhost/46655] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:10:46,840 INFO [Listener at localhost/46655] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 06:10:46,843 INFO [Listener at localhost/46655] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:10:46,844 INFO [Listener at localhost/46655] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@289fa920{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/java.io.tmpdir/jetty-0_0_0_0-36883-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8891169814567917118/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:10:46,847 INFO [Listener at localhost/46655] server.AbstractConnector(333): Started ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:36883} 2023-07-24 06:10:46,847 INFO [Listener at localhost/46655] server.Server(415): Started @12448ms 2023-07-24 06:10:46,852 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(951): ClusterId : 507c294c-2fca-4595-bc60-3a33e5060e50 2023-07-24 06:10:46,853 DEBUG [RS:3;jenkins-hbase4:34793] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:10:46,856 DEBUG [RS:3;jenkins-hbase4:34793] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:10:46,856 DEBUG [RS:3;jenkins-hbase4:34793] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:10:46,858 DEBUG [RS:3;jenkins-hbase4:34793] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:10:46,864 DEBUG [RS:3;jenkins-hbase4:34793] zookeeper.ReadOnlyZKClient(139): Connect 0x2dd2b676 to 
127.0.0.1:54990 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:10:46,876 DEBUG [RS:3;jenkins-hbase4:34793] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23154dbc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:10:46,876 DEBUG [RS:3;jenkins-hbase4:34793] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@272add06, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:10:46,885 DEBUG [RS:3;jenkins-hbase4:34793] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:34793 2023-07-24 06:10:46,885 INFO [RS:3;jenkins-hbase4:34793] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:10:46,885 INFO [RS:3;jenkins-hbase4:34793] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:10:46,885 DEBUG [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:10:46,887 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39303,1690179040397 with isa=jenkins-hbase4.apache.org/172.31.14.131:34793, startcode=1690179046626 2023-07-24 06:10:46,887 DEBUG [RS:3;jenkins-hbase4:34793] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:10:46,894 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43509, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:10:46,895 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39303] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:46,895 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 06:10:46,900 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:46,901 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 06:10:46,905 DEBUG [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50 2023-07-24 06:10:46,905 DEBUG [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41501 2023-07-24 06:10:46,905 DEBUG [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33633 2023-07-24 06:10:46,911 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39303,1690179040397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 06:10:46,911 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:10:46,912 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:10:46,911 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:10:46,912 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:10:46,913 DEBUG [RS:3;jenkins-hbase4:34793] zookeeper.ZKUtil(162): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:46,913 WARN [RS:3;jenkins-hbase4:34793] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
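The RS:3 startup above is the test restoring its region server count: an extra region server is launched inside the already-running mini-cluster, reports for duty, and is added to the rsgroup default group (servers: 4). A sketch of doing the same from test code with HBaseTestingUtility; the utility instance is assumed to hold the running cluster from this log:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;

public class AddRegionServerSketch {
  static void addOneRegionServer(HBaseTestingUtility testUtil) throws Exception {
    MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
    cluster.startRegionServer();   // new in-process RS; it registers via reportForDuty
    System.out.println("region server threads: "
        + cluster.getLiveRegionServerThreads().size());
  }
}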
2023-07-24 06:10:46,913 INFO [RS:3;jenkins-hbase4:34793] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:10:46,913 DEBUG [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:46,913 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:46,914 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34793,1690179046626] 2023-07-24 06:10:46,914 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:46,914 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:46,914 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:46,915 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:46,915 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:46,915 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:46,915 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:46,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:46,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:46,918 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:46,919 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:46,939 
DEBUG [RS:3;jenkins-hbase4:34793] zookeeper.ZKUtil(162): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:46,939 DEBUG [RS:3;jenkins-hbase4:34793] zookeeper.ZKUtil(162): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:46,940 DEBUG [RS:3;jenkins-hbase4:34793] zookeeper.ZKUtil(162): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:46,941 DEBUG [RS:3;jenkins-hbase4:34793] zookeeper.ZKUtil(162): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:46,943 DEBUG [RS:3;jenkins-hbase4:34793] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:10:46,943 INFO [RS:3;jenkins-hbase4:34793] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:10:46,957 INFO [RS:3;jenkins-hbase4:34793] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:10:46,959 INFO [RS:3;jenkins-hbase4:34793] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:10:46,959 INFO [RS:3;jenkins-hbase4:34793] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:46,959 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:10:46,963 INFO [RS:3;jenkins-hbase4:34793] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 06:10:46,963 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,963 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,963 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,963 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,963 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,963 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:10:46,963 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,963 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,964 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,964 DEBUG [RS:3;jenkins-hbase4:34793] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:10:46,971 INFO [RS:3;jenkins-hbase4:34793] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:46,971 INFO [RS:3;jenkins-hbase4:34793] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:46,971 INFO [RS:3;jenkins-hbase4:34793] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:10:46,987 INFO [RS:3;jenkins-hbase4:34793] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:10:46,987 INFO [RS:3;jenkins-hbase4:34793] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34793,1690179046626-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 06:10:46,998 INFO [RS:3;jenkins-hbase4:34793] regionserver.Replication(203): jenkins-hbase4.apache.org,34793,1690179046626 started 2023-07-24 06:10:46,998 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34793,1690179046626, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34793, sessionid=0x10195f3f3a2000b 2023-07-24 06:10:46,998 DEBUG [RS:3;jenkins-hbase4:34793] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:10:46,998 DEBUG [RS:3;jenkins-hbase4:34793] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:46,998 DEBUG [RS:3;jenkins-hbase4:34793] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34793,1690179046626' 2023-07-24 06:10:46,999 DEBUG [RS:3;jenkins-hbase4:34793] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:10:46,999 DEBUG [RS:3;jenkins-hbase4:34793] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:10:47,000 DEBUG [RS:3;jenkins-hbase4:34793] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:10:47,000 DEBUG [RS:3;jenkins-hbase4:34793] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:10:47,000 DEBUG [RS:3;jenkins-hbase4:34793] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:47,000 DEBUG [RS:3;jenkins-hbase4:34793] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34793,1690179046626' 2023-07-24 06:10:47,000 DEBUG [RS:3;jenkins-hbase4:34793] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:10:47,001 DEBUG [RS:3;jenkins-hbase4:34793] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:10:47,002 DEBUG [RS:3;jenkins-hbase4:34793] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:10:47,002 INFO [RS:3;jenkins-hbase4:34793] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 06:10:47,003 INFO [RS:3;jenkins-hbase4:34793] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 06:10:47,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:10:47,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:47,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:47,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:10:47,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:47,020 DEBUG [hconnection-0x63197ba-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:10:47,023 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34662, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:10:47,028 DEBUG [hconnection-0x63197ba-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:10:47,032 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56390, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:10:47,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:47,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:47,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:10:47,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:47,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:53912 deadline: 1690180247045, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:10:47,047 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:10:47,049 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:47,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:47,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:47,051 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:10:47,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:47,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:47,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:47,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:47,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:47,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:47,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:47,066 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:47,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:47,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:47,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:47,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:47,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:34793] to rsgroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:47,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:47,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:47,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:47,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:47,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 06:10:47,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942] are moved back to default 2023-07-24 06:10:47,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:47,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:47,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:47,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:47,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 
06:10:47,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:47,106 INFO [RS:3;jenkins-hbase4:34793] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34793%2C1690179046626, suffix=, logDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,34793,1690179046626, archiveDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs, maxLogs=32 2023-07-24 06:10:47,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:10:47,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:47,122 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:10:47,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-24 06:10:47,131 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:47,132 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:47,134 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:47,136 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:47,145 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK] 2023-07-24 06:10:47,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:10:47,156 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:10:47,168 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK] 2023-07-24 06:10:47,171 DEBUG [RS-EventLoopGroup-7-3] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK] 2023-07-24 06:10:47,171 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,173 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,181 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:47,181 INFO [RS:3;jenkins-hbase4:34793] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,34793,1690179046626/jenkins-hbase4.apache.org%2C34793%2C1690179046626.1690179047107 2023-07-24 06:10:47,181 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 empty. 2023-07-24 06:10:47,185 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 empty. 2023-07-24 06:10:47,185 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,186 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,187 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,187 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 empty. 2023-07-24 06:10:47,188 DEBUG [RS:3;jenkins-hbase4:34793] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43363,DS-c6b24760-0e8e-4bab-a663-083e67e7e743,DISK], DatanodeInfoWithStorage[127.0.0.1:42505,DS-bc172f24-05df-4aac-85b6-4bdb55b9237c,DISK], DatanodeInfoWithStorage[127.0.0.1:36273,DS-6d613184-002e-4bc1-818d-19f01e921e96,DISK]] 2023-07-24 06:10:47,188 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a empty. 
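[editor's note] The entries between 06:10:47,006 and 06:10:47,101 trace the test's rsgroup setup: AddRSGroup, a MoveServers attempt that fails with ConstraintException for the master's own address, and MoveServers into Group_testTableMoveTruncateAndDrop_1909395056. Below is a minimal client-side sketch of those calls, not taken from the test source; RSGroupAdminClient and Address are the classes named in the stack trace above, while the connection setup and exact server set are illustrative assumptions.

```java
// Sketch only: reproduces the AddRSGroup / MoveServers / ListRSGroupInfos requests seen in the log.
// RSGroupAdminClient is the client class from the stack trace; connection details are illustrative.
import java.util.Arrays;
import java.util.HashSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupSetupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // RSGroupAdminService.AddRSGroup in the log.
      rsGroupAdmin.addRSGroup("Group_testTableMoveTruncateAndDrop_1909395056");

      // RSGroupAdminService.MoveServers in the log; moving the master's own host:port
      // instead would fail with ConstraintException, as the earlier entries show.
      rsGroupAdmin.moveServers(
          new HashSet<>(Arrays.asList(
              Address.fromString("jenkins-hbase4.apache.org:37173"),
              Address.fromString("jenkins-hbase4.apache.org:34793"))),
          "Group_testTableMoveTruncateAndDrop_1909395056");

      // RSGroupAdminService.ListRSGroupInfos in the log.
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
        System.out.println(info.getName() + " -> " + info.getServers());
      }
    }
  }
}
```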
2023-07-24 06:10:47,189 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:47,190 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 empty. 2023-07-24 06:10:47,190 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,190 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,191 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,191 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 06:10:47,244 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:47,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:10:47,267 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 05c3edc2434b2bbeaeb332da7dc8e4c4, NAME => 'Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:47,267 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 811edc04fbbb653e34e57c06c797b099, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:47,289 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f167fc25d19ff520e165f8adb30ba159, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:47,427 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 05c3edc2434b2bbeaeb332da7dc8e4c4, disabling compactions & flushes 2023-07-24 06:10:47,428 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:47,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:47,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. after waiting 0 ms 2023-07-24 06:10:47,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:47,428 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 
2023-07-24 06:10:47,429 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 05c3edc2434b2bbeaeb332da7dc8e4c4: 2023-07-24 06:10:47,429 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 16947d848131931f060504e8df5f0962, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:47,435 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,438 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 811edc04fbbb653e34e57c06c797b099, disabling compactions & flushes 2023-07-24 06:10:47,438 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:47,438 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:47,438 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. after waiting 0 ms 2023-07-24 06:10:47,439 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:47,439 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 
2023-07-24 06:10:47,439 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 811edc04fbbb653e34e57c06c797b099: 2023-07-24 06:10:47,439 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 901dcf1ed239ff6c92413b41f5045f8a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:47,445 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,447 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing f167fc25d19ff520e165f8adb30ba159, disabling compactions & flushes 2023-07-24 06:10:47,447 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:47,447 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:47,447 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. after waiting 0 ms 2023-07-24 06:10:47,447 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:47,447 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 
2023-07-24 06:10:47,447 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f167fc25d19ff520e165f8adb30ba159: 2023-07-24 06:10:47,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:10:47,484 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,484 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 901dcf1ed239ff6c92413b41f5045f8a, disabling compactions & flushes 2023-07-24 06:10:47,484 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:47,485 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:47,485 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. after waiting 0 ms 2023-07-24 06:10:47,485 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:47,485 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:47,485 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 901dcf1ed239ff6c92413b41f5045f8a: 2023-07-24 06:10:47,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 16947d848131931f060504e8df5f0962, disabling compactions & flushes 2023-07-24 06:10:47,487 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:47,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:47,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 
after waiting 0 ms 2023-07-24 06:10:47,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:47,487 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:47,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 16947d848131931f060504e8df5f0962: 2023-07-24 06:10:47,492 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:10:47,494 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179047493"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179047493"}]},"ts":"1690179047493"} 2023-07-24 06:10:47,494 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179047493"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179047493"}]},"ts":"1690179047493"} 2023-07-24 06:10:47,494 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179047493"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179047493"}]},"ts":"1690179047493"} 2023-07-24 06:10:47,494 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179047493"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179047493"}]},"ts":"1690179047493"} 2023-07-24 06:10:47,495 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179047493"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179047493"}]},"ts":"1690179047493"} 2023-07-24 06:10:47,551 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
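[editor's note] The CreateTableProcedure entries from 06:10:47,116 onward record the table descriptor (family 'f', REGION_REPLICATION => '1') and five regions whose boundaries appear in the HRegion(7675) "creating" entries. A minimal sketch of an equivalent client-side createTable call follows; the connection/admin setup is an illustrative assumption, and the split keys are decoded from the \xNN escapes in the logged STARTKEY/ENDKEY values.

```java
// Sketch only: an Admin.createTable call matching the descriptor and split keys logged above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Descriptor as logged: one family 'f' with VERSIONS => '1', REGION_REPLICATION => '1'.
      TableDescriptorBuilder table = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .build());

      // Four split points give the five regions created above; toBytesBinary
      // decodes the \xNN escapes used in the logged region names.
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(table.build(), splits);
    }
  }
}
```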
2023-07-24 06:10:47,553 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:10:47,553 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179047553"}]},"ts":"1690179047553"} 2023-07-24 06:10:47,555 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 06:10:47,564 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:47,565 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:47,565 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:47,565 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:47,566 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, ASSIGN}] 2023-07-24 06:10:47,570 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, ASSIGN 2023-07-24 06:10:47,570 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, ASSIGN 2023-07-24 06:10:47,571 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, ASSIGN 2023-07-24 06:10:47,572 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, ASSIGN 2023-07-24 06:10:47,575 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:10:47,575 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:10:47,576 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:10:47,576 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:10:47,578 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, ASSIGN 2023-07-24 06:10:47,579 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:10:47,725 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 06:10:47,730 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=901dcf1ed239ff6c92413b41f5045f8a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:47,731 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179047730"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179047730"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179047730"}]},"ts":"1690179047730"} 2023-07-24 06:10:47,731 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=16947d848131931f060504e8df5f0962, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:47,731 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=05c3edc2434b2bbeaeb332da7dc8e4c4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:47,732 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179047731"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179047731"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179047731"}]},"ts":"1690179047731"} 2023-07-24 06:10:47,732 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179047731"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179047731"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179047731"}]},"ts":"1690179047731"} 2023-07-24 06:10:47,732 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=811edc04fbbb653e34e57c06c797b099, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:47,732 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=f167fc25d19ff520e165f8adb30ba159, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:47,732 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179047732"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179047732"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179047732"}]},"ts":"1690179047732"} 2023-07-24 06:10:47,732 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179047731"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179047731"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179047731"}]},"ts":"1690179047731"} 2023-07-24 06:10:47,736 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 
901dcf1ed239ff6c92413b41f5045f8a, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:47,740 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=16, state=RUNNABLE; OpenRegionProcedure 16947d848131931f060504e8df5f0962, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:10:47,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; OpenRegionProcedure 05c3edc2434b2bbeaeb332da7dc8e4c4, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:10:47,747 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=14, state=RUNNABLE; OpenRegionProcedure 811edc04fbbb653e34e57c06c797b099, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:47,749 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=15, state=RUNNABLE; OpenRegionProcedure f167fc25d19ff520e165f8adb30ba159, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:10:47,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:10:47,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:47,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 16947d848131931f060504e8df5f0962, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 06:10:47,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 
2023-07-24 06:10:47,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 811edc04fbbb653e34e57c06c797b099, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 06:10:47,908 INFO [StoreOpener-16947d848131931f060504e8df5f0962-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,911 DEBUG [StoreOpener-16947d848131931f060504e8df5f0962-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/f 2023-07-24 06:10:47,911 DEBUG [StoreOpener-16947d848131931f060504e8df5f0962-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/f 2023-07-24 06:10:47,912 INFO [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,912 INFO [StoreOpener-16947d848131931f060504e8df5f0962-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 16947d848131931f060504e8df5f0962 columnFamilyName f 2023-07-24 06:10:47,913 INFO [StoreOpener-16947d848131931f060504e8df5f0962-1] regionserver.HStore(310): Store=16947d848131931f060504e8df5f0962/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:47,914 DEBUG [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/f 2023-07-24 06:10:47,915 DEBUG [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/f 2023-07-24 06:10:47,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,916 INFO [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 811edc04fbbb653e34e57c06c797b099 columnFamilyName f 2023-07-24 06:10:47,917 INFO [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] regionserver.HStore(310): Store=811edc04fbbb653e34e57c06c797b099/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:47,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:47,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 16947d848131931f060504e8df5f0962 2023-07-24 06:10:47,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:47,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 811edc04fbbb653e34e57c06c797b099; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11250847200, jitterRate=0.04781679809093475}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:47,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 811edc04fbbb653e34e57c06c797b099: 2023-07-24 06:10:47,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099., pid=21, masterSystemTime=1690179047896 2023-07-24 06:10:47,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:47,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:47,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:47,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 
2023-07-24 06:10:47,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 901dcf1ed239ff6c92413b41f5045f8a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 06:10:47,934 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=811edc04fbbb653e34e57c06c797b099, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:47,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 16947d848131931f060504e8df5f0962; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10809873280, jitterRate=0.006747901439666748}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:47,935 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179047934"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179047934"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179047934"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179047934"}]},"ts":"1690179047934"} 2023-07-24 06:10:47,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 16947d848131931f060504e8df5f0962: 2023-07-24 06:10:47,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962., pid=19, masterSystemTime=1690179047897 2023-07-24 06:10:47,939 INFO [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 
2023-07-24 06:10:47,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:47,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:47,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 05c3edc2434b2bbeaeb332da7dc8e4c4, NAME => 'Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 06:10:47,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:47,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,942 DEBUG [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/f 2023-07-24 06:10:47,942 DEBUG [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/f 2023-07-24 06:10:47,942 INFO [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 901dcf1ed239ff6c92413b41f5045f8a columnFamilyName f 2023-07-24 06:10:47,943 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=16947d848131931f060504e8df5f0962, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:47,943 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179047942"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179047942"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179047942"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179047942"}]},"ts":"1690179047942"} 2023-07-24 06:10:47,943 INFO [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] regionserver.HStore(310): Store=901dcf1ed239ff6c92413b41f5045f8a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:47,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=14 2023-07-24 06:10:47,949 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=14, state=SUCCESS; OpenRegionProcedure 811edc04fbbb653e34e57c06c797b099, server=jenkins-hbase4.apache.org,40449,1690179042726 in 192 msec 2023-07-24 06:10:47,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,951 INFO [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,958 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, ASSIGN in 385 msec 2023-07-24 06:10:47,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=16 2023-07-24 06:10:47,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=16, state=SUCCESS; OpenRegionProcedure 16947d848131931f060504e8df5f0962, server=jenkins-hbase4.apache.org,38203,1690179042473 in 214 msec 2023-07-24 06:10:47,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:47,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, ASSIGN in 393 msec 2023-07-24 06:10:47,962 DEBUG [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/f 2023-07-24 06:10:47,962 DEBUG [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/f 2023-07-24 06:10:47,963 INFO [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 05c3edc2434b2bbeaeb332da7dc8e4c4 columnFamilyName f 2023-07-24 06:10:47,964 INFO [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] regionserver.HStore(310): Store=05c3edc2434b2bbeaeb332da7dc8e4c4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:47,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:47,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,972 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 901dcf1ed239ff6c92413b41f5045f8a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10751924320, jitterRate=0.00135098397731781}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:47,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 901dcf1ed239ff6c92413b41f5045f8a: 2023-07-24 06:10:47,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a., pid=18, masterSystemTime=1690179047896 2023-07-24 06:10:47,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:47,978 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 
2023-07-24 06:10:47,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:47,979 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=901dcf1ed239ff6c92413b41f5045f8a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:47,979 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179047978"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179047978"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179047978"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179047978"}]},"ts":"1690179047978"} 2023-07-24 06:10:47,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:47,993 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 05c3edc2434b2bbeaeb332da7dc8e4c4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10675674080, jitterRate=-0.005750373005867004}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:47,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 05c3edc2434b2bbeaeb332da7dc8e4c4: 2023-07-24 06:10:47,995 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 06:10:47,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 901dcf1ed239ff6c92413b41f5045f8a, server=jenkins-hbase4.apache.org,40449,1690179042726 in 252 msec 2023-07-24 06:10:47,996 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4., pid=20, masterSystemTime=1690179047897 2023-07-24 06:10:47,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:47,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:47,999 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, ASSIGN in 430 msec 2023-07-24 06:10:47,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 
2023-07-24 06:10:48,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f167fc25d19ff520e165f8adb30ba159, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 06:10:48,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:48,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,003 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=05c3edc2434b2bbeaeb332da7dc8e4c4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:48,003 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048002"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179048002"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179048002"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179048002"}]},"ts":"1690179048002"} 2023-07-24 06:10:48,007 INFO [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,010 DEBUG [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/f 2023-07-24 06:10:48,010 DEBUG [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/f 2023-07-24 06:10:48,011 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=13 2023-07-24 06:10:48,012 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=13, state=SUCCESS; OpenRegionProcedure 05c3edc2434b2bbeaeb332da7dc8e4c4, server=jenkins-hbase4.apache.org,38203,1690179042473 in 262 msec 2023-07-24 06:10:48,013 INFO [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 
EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f167fc25d19ff520e165f8adb30ba159 columnFamilyName f 2023-07-24 06:10:48,014 INFO [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] regionserver.HStore(310): Store=f167fc25d19ff520e165f8adb30ba159/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:48,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,019 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, ASSIGN in 447 msec 2023-07-24 06:10:48,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:48,033 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f167fc25d19ff520e165f8adb30ba159; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9732842880, jitterRate=-0.09355837106704712}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:48,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f167fc25d19ff520e165f8adb30ba159: 2023-07-24 06:10:48,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159., pid=22, masterSystemTime=1690179047897 2023-07-24 06:10:48,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:48,038 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 
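At this point all five regions have been opened on the two servers of the default group (jenkins-hbase4.apache.org,38203,... and ...,40449,...), and each OPEN state plus its server location has been written back to hbase:meta in the Put records above. A small sketch, not from the test, of how a client could read those locations back through the public RegionLocator API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class PrintRegionLocationsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(tn)) {
          // Each entry pairs an encoded region name with the server that hbase:meta
          // currently records for it (e.g. jenkins-hbase4.apache.org,38203,...).
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }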
2023-07-24 06:10:48,043 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=f167fc25d19ff520e165f8adb30ba159, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:48,043 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048042"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179048042"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179048042"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179048042"}]},"ts":"1690179048042"} 2023-07-24 06:10:48,051 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=15 2023-07-24 06:10:48,051 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=15, state=SUCCESS; OpenRegionProcedure f167fc25d19ff520e165f8adb30ba159, server=jenkins-hbase4.apache.org,38203,1690179042473 in 297 msec 2023-07-24 06:10:48,056 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-24 06:10:48,056 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, ASSIGN in 485 msec 2023-07-24 06:10:48,057 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:10:48,058 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179048057"}]},"ts":"1690179048057"} 2023-07-24 06:10:48,060 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 06:10:48,064 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:10:48,066 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 947 msec 2023-07-24 06:10:48,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:10:48,271 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-24 06:10:48,271 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-24 06:10:48,272 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:48,279 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 
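The HBaseTestingUtility lines here show the test blocking until the assignment manager reports every region of the new table as open; the 60000 ms figure in the log is the timeout passed to that helper. A hedged sketch of the same wait, assuming an already-started mini-cluster utility (the variable and method wrapper names are illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    class WaitForAssignmentSketch {
      // testUtil is assumed to be the mini-cluster utility the test started earlier.
      static void waitForTable(HBaseTestingUtility testUtil) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Blocks until hbase:meta and the master's in-memory assignment state both
        // show every region of the table as assigned, or the 60s timeout expires.
        testUtil.waitUntilAllRegionsAssigned(tn, 60_000);
      }
    }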
2023-07-24 06:10:48,279 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:48,280 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-24 06:10:48,280 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:48,285 DEBUG [Listener at localhost/46655] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:10:48,292 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56844, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:10:48,295 DEBUG [Listener at localhost/46655] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:10:48,298 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45292, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:10:48,299 DEBUG [Listener at localhost/46655] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:10:48,304 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56394, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:10:48,307 DEBUG [Listener at localhost/46655] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:10:48,310 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34664, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:10:48,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:48,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:10:48,325 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:48,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:48,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:48,345 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region 05c3edc2434b2bbeaeb332da7dc8e4c4 to RSGroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:48,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:48,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:48,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:48,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:48,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, REOPEN/MOVE 2023-07-24 06:10:48,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region 811edc04fbbb653e34e57c06c797b099 to RSGroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:48,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:48,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:48,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:48,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:48,351 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, REOPEN/MOVE 2023-07-24 06:10:48,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, REOPEN/MOVE 2023-07-24 06:10:48,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region f167fc25d19ff520e165f8adb30ba159 to RSGroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,353 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for 
pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, REOPEN/MOVE 2023-07-24 06:10:48,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:48,354 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=05c3edc2434b2bbeaeb332da7dc8e4c4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:48,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:48,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:48,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:48,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:48,354 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048354"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048354"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048354"}]},"ts":"1690179048354"} 2023-07-24 06:10:48,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, REOPEN/MOVE 2023-07-24 06:10:48,355 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=811edc04fbbb653e34e57c06c797b099, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:48,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region 16947d848131931f060504e8df5f0962 to RSGroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,356 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048355"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048355"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048355"}]},"ts":"1690179048355"} 2023-07-24 06:10:48,356 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, REOPEN/MOVE 2023-07-24 06:10:48,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:48,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:48,358 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; CloseRegionProcedure 05c3edc2434b2bbeaeb332da7dc8e4c4, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:10:48,358 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=f167fc25d19ff520e165f8adb30ba159, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:48,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:48,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:48,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:48,359 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048358"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048358"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048358"}]},"ts":"1690179048358"} 2023-07-24 06:10:48,359 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=24, state=RUNNABLE; CloseRegionProcedure 811edc04fbbb653e34e57c06c797b099, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:48,362 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=25, state=RUNNABLE; CloseRegionProcedure f167fc25d19ff520e165f8adb30ba159, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:10:48,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, REOPEN/MOVE 2023-07-24 06:10:48,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region 901dcf1ed239ff6c92413b41f5045f8a to RSGroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:48,365 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, REOPEN/MOVE 2023-07-24 06:10:48,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:48,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:48,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:48,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:48,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:48,368 INFO 
[PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=16947d848131931f060504e8df5f0962, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:48,368 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048368"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048368"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048368"}]},"ts":"1690179048368"} 2023-07-24 06:10:48,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, REOPEN/MOVE 2023-07-24 06:10:48,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1909395056, current retry=0 2023-07-24 06:10:48,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 16947d848131931f060504e8df5f0962, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:10:48,373 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, REOPEN/MOVE 2023-07-24 06:10:48,375 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=901dcf1ed239ff6c92413b41f5045f8a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:48,375 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048375"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048375"}]},"ts":"1690179048375"} 2023-07-24 06:10:48,379 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=30, state=RUNNABLE; CloseRegionProcedure 901dcf1ed239ff6c92413b41f5045f8a, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:48,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 16947d848131931f060504e8df5f0962, disabling compactions & flushes 2023-07-24 06:10:48,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 901dcf1ed239ff6c92413b41f5045f8a, disabling compactions & flushes 2023-07-24 06:10:48,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 
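The burst of TransitRegionStateProcedure REOPEN/MOVE entries above ("Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1909395056, current retry=0") is the master acting on an RSGroupAdminService.MoveTables request from the test client. A minimal sketch of that client side follows; the RSGroupAdminClient class and the addRSGroup/moveServers/moveTables method names are assumed from the branch-2.4 hbase-rsgroup module, so treat it as illustrative rather than the test's exact code.

// Sketch only: RSGroupAdminClient and its method names are assumptions taken
// from the branch-2.4 hbase-rsgroup module; host/port values are copied from the log.
import java.util.Collections;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_1909395056";
      rsGroupAdmin.addRSGroup(group);
      // Give the new group at least one region server before moving the table.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34793)), group);
      // Moving the table is what triggers one REOPEN/MOVE procedure per region above.
      Set<TableName> tables =
          Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
      rsGroupAdmin.moveTables(tables, group);
    }
  }
}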
2023-07-24 06:10:48,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:48,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:48,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:48,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. after waiting 0 ms 2023-07-24 06:10:48,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. after waiting 0 ms 2023-07-24 06:10:48,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:48,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:48,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:48,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:48,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 16947d848131931f060504e8df5f0962: 2023-07-24 06:10:48,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 16947d848131931f060504e8df5f0962 move to jenkins-hbase4.apache.org,34793,1690179046626 record at close sequenceid=2 2023-07-24 06:10:48,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:48,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 
2023-07-24 06:10:48,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 901dcf1ed239ff6c92413b41f5045f8a: 2023-07-24 06:10:48,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 901dcf1ed239ff6c92413b41f5045f8a move to jenkins-hbase4.apache.org,34793,1690179046626 record at close sequenceid=2 2023-07-24 06:10:48,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 05c3edc2434b2bbeaeb332da7dc8e4c4, disabling compactions & flushes 2023-07-24 06:10:48,542 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:48,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:48,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. after waiting 0 ms 2023-07-24 06:10:48,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:48,548 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=16947d848131931f060504e8df5f0962, regionState=CLOSED 2023-07-24 06:10:48,548 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048548"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179048548"}]},"ts":"1690179048548"} 2023-07-24 06:10:48,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 811edc04fbbb653e34e57c06c797b099, disabling compactions & flushes 2023-07-24 06:10:48,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:48,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:48,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 
after waiting 0 ms 2023-07-24 06:10:48,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:48,554 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=901dcf1ed239ff6c92413b41f5045f8a, regionState=CLOSED 2023-07-24 06:10:48,554 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048554"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179048554"}]},"ts":"1690179048554"} 2023-07-24 06:10:48,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-24 06:10:48,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 16947d848131931f060504e8df5f0962, server=jenkins-hbase4.apache.org,38203,1690179042473 in 180 msec 2023-07-24 06:10:48,563 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34793,1690179046626; forceNewPlan=false, retain=false 2023-07-24 06:10:48,565 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=30 2023-07-24 06:10:48,565 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=30, state=SUCCESS; CloseRegionProcedure 901dcf1ed239ff6c92413b41f5045f8a, server=jenkins-hbase4.apache.org,40449,1690179042726 in 180 msec 2023-07-24 06:10:48,566 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34793,1690179046626; forceNewPlan=false, retain=false 2023-07-24 06:10:48,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:48,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:48,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:48,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 
2023-07-24 06:10:48,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 05c3edc2434b2bbeaeb332da7dc8e4c4: 2023-07-24 06:10:48,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 811edc04fbbb653e34e57c06c797b099: 2023-07-24 06:10:48,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 05c3edc2434b2bbeaeb332da7dc8e4c4 move to jenkins-hbase4.apache.org,37173,1690179042942 record at close sequenceid=2 2023-07-24 06:10:48,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 811edc04fbbb653e34e57c06c797b099 move to jenkins-hbase4.apache.org,37173,1690179042942 record at close sequenceid=2 2023-07-24 06:10:48,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,575 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=811edc04fbbb653e34e57c06c797b099, regionState=CLOSED 2023-07-24 06:10:48,577 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048575"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179048575"}]},"ts":"1690179048575"} 2023-07-24 06:10:48,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f167fc25d19ff520e165f8adb30ba159, disabling compactions & flushes 2023-07-24 06:10:48,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:48,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:48,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. after waiting 0 ms 2023-07-24 06:10:48,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 
2023-07-24 06:10:48,580 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=05c3edc2434b2bbeaeb332da7dc8e4c4, regionState=CLOSED 2023-07-24 06:10:48,580 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048580"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179048580"}]},"ts":"1690179048580"} 2023-07-24 06:10:48,585 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=24 2023-07-24 06:10:48,585 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; CloseRegionProcedure 811edc04fbbb653e34e57c06c797b099, server=jenkins-hbase4.apache.org,40449,1690179042726 in 221 msec 2023-07-24 06:10:48,586 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-24 06:10:48,586 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; CloseRegionProcedure 05c3edc2434b2bbeaeb332da7dc8e4c4, server=jenkins-hbase4.apache.org,38203,1690179042473 in 224 msec 2023-07-24 06:10:48,586 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37173,1690179042942; forceNewPlan=false, retain=false 2023-07-24 06:10:48,587 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37173,1690179042942; forceNewPlan=false, retain=false 2023-07-24 06:10:48,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:48,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 
2023-07-24 06:10:48,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f167fc25d19ff520e165f8adb30ba159: 2023-07-24 06:10:48,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f167fc25d19ff520e165f8adb30ba159 move to jenkins-hbase4.apache.org,37173,1690179042942 record at close sequenceid=2 2023-07-24 06:10:48,603 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,604 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=f167fc25d19ff520e165f8adb30ba159, regionState=CLOSED 2023-07-24 06:10:48,604 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048604"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179048604"}]},"ts":"1690179048604"} 2023-07-24 06:10:48,613 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=25 2023-07-24 06:10:48,613 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=25, state=SUCCESS; CloseRegionProcedure f167fc25d19ff520e165f8adb30ba159, server=jenkins-hbase4.apache.org,38203,1690179042473 in 245 msec 2023-07-24 06:10:48,615 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37173,1690179042942; forceNewPlan=false, retain=false 2023-07-24 06:10:48,714 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
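Each region above goes through the same cycle: a CloseRegionProcedure on the old server, a new assignment candidate chosen with forceNewPlan=false/retain=false, then (below) an OpenRegionProcedure on a server in the target group. The same cycle can be driven for a single region with the plain Admin API; a hedged sketch, assuming the Admin#move(byte[], ServerName) overload of the 2.x client, with the encoded region name and destination copied from the log.

// Sketch only: assumes Admin#move(byte[] encodedRegionName, ServerName dest)
// is available in this client version; values are copied from the log above.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveSingleRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // ServerName uses the "host,port,startcode" form seen in the regionLocation fields.
      ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org,34793,1690179046626");
      admin.move(Bytes.toBytes("16947d848131931f060504e8df5f0962"), dest);
    }
  }
}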
2023-07-24 06:10:48,715 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=05c3edc2434b2bbeaeb332da7dc8e4c4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:48,715 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=16947d848131931f060504e8df5f0962, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:48,715 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=901dcf1ed239ff6c92413b41f5045f8a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:48,715 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=811edc04fbbb653e34e57c06c797b099, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:48,715 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048715"}]},"ts":"1690179048715"} 2023-07-24 06:10:48,715 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=f167fc25d19ff520e165f8adb30ba159, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:48,715 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048715"}]},"ts":"1690179048715"} 2023-07-24 06:10:48,715 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048715"}]},"ts":"1690179048715"} 2023-07-24 06:10:48,715 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048714"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048714"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048714"}]},"ts":"1690179048714"} 2023-07-24 06:10:48,715 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048715"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179048715"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179048715"}]},"ts":"1690179048715"} 2023-07-24 06:10:48,718 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=27, state=RUNNABLE; OpenRegionProcedure 
16947d848131931f060504e8df5f0962, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:48,719 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=24, state=RUNNABLE; OpenRegionProcedure 811edc04fbbb653e34e57c06c797b099, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:48,720 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=30, state=RUNNABLE; OpenRegionProcedure 901dcf1ed239ff6c92413b41f5045f8a, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:48,722 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=25, state=RUNNABLE; OpenRegionProcedure f167fc25d19ff520e165f8adb30ba159, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:48,726 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=23, state=RUNNABLE; OpenRegionProcedure 05c3edc2434b2bbeaeb332da7dc8e4c4, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:48,870 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:48,870 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:10:48,872 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56850, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:10:48,873 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:48,873 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:10:48,877 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45300, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:10:48,884 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 
2023-07-24 06:10:48,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 901dcf1ed239ff6c92413b41f5045f8a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 06:10:48,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:48,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:48,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 05c3edc2434b2bbeaeb332da7dc8e4c4, NAME => 'Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 06:10:48,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,894 INFO [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,894 INFO [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,896 DEBUG [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/f 2023-07-24 06:10:48,896 DEBUG [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/f 2023-07-24 06:10:48,896 INFO [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 901dcf1ed239ff6c92413b41f5045f8a columnFamilyName f 2023-07-24 06:10:48,896 DEBUG [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/f 2023-07-24 06:10:48,897 DEBUG [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/f 2023-07-24 06:10:48,897 INFO [StoreOpener-901dcf1ed239ff6c92413b41f5045f8a-1] regionserver.HStore(310): Store=901dcf1ed239ff6c92413b41f5045f8a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:48,901 INFO [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 05c3edc2434b2bbeaeb332da7dc8e4c4 columnFamilyName f 2023-07-24 06:10:48,902 INFO [StoreOpener-05c3edc2434b2bbeaeb332da7dc8e4c4-1] regionserver.HStore(310): Store=05c3edc2434b2bbeaeb332da7dc8e4c4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:48,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,903 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:48,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:48,911 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 901dcf1ed239ff6c92413b41f5045f8a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11075600800, jitterRate=0.031495705246925354}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:48,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 901dcf1ed239ff6c92413b41f5045f8a: 2023-07-24 06:10:48,914 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 05c3edc2434b2bbeaeb332da7dc8e4c4; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11634852480, jitterRate=0.08358007669448853}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:48,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 05c3edc2434b2bbeaeb332da7dc8e4c4: 2023-07-24 06:10:48,918 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4., pid=37, masterSystemTime=1690179048873 2023-07-24 06:10:48,920 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a., pid=35, masterSystemTime=1690179048870 2023-07-24 06:10:48,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:48,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:48,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 
2023-07-24 06:10:48,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 811edc04fbbb653e34e57c06c797b099, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 06:10:48,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:48,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,928 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=05c3edc2434b2bbeaeb332da7dc8e4c4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:48,929 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048928"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179048928"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179048928"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179048928"}]},"ts":"1690179048928"} 2023-07-24 06:10:48,934 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=901dcf1ed239ff6c92413b41f5045f8a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:48,934 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179048933"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179048933"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179048933"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179048933"}]},"ts":"1690179048933"} 2023-07-24 06:10:48,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 
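The RegionStateStore Put entries above and below write the region's state into hbase:meta under the info family (regioninfo, server, serverstartcode, seqnumDuringOpen, state). Those columns can be read back with the ordinary client API; a minimal sketch using only standard classes, with the column qualifiers taken from the log output:

// Minimal sketch: read back the hbase:meta columns that the Put entries above
// write (info:server, info:state) for the test table's rows.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaSketch {
  public static void main(String[] args) throws Exception {
    byte[] info = Bytes.toBytes("info");
    String prefix = "Group_testTableMoveTruncateAndDrop,";
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner =
             meta.getScanner(new Scan().withStartRow(Bytes.toBytes(prefix)))) {
      for (Result r : scanner) {
        String row = Bytes.toString(r.getRow());
        if (!row.startsWith(prefix)) break;   // past the test table's rows
        System.out.printf("%s server=%s state=%s%n", row,
            Bytes.toString(r.getValue(info, Bytes.toBytes("server"))),
            Bytes.toString(r.getValue(info, Bytes.toBytes("state"))));
      }
    }
  }
}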
2023-07-24 06:10:48,941 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=23 2023-07-24 06:10:48,944 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=23, state=SUCCESS; OpenRegionProcedure 05c3edc2434b2bbeaeb332da7dc8e4c4, server=jenkins-hbase4.apache.org,37173,1690179042942 in 207 msec 2023-07-24 06:10:48,946 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=30 2023-07-24 06:10:48,946 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=30, state=SUCCESS; OpenRegionProcedure 901dcf1ed239ff6c92413b41f5045f8a, server=jenkins-hbase4.apache.org,34793,1690179046626 in 219 msec 2023-07-24 06:10:48,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:48,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:48,941 INFO [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,946 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, REOPEN/MOVE in 595 msec 2023-07-24 06:10:48,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 16947d848131931f060504e8df5f0962, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 06:10:48,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:48,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,949 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, REOPEN/MOVE in 580 msec 2023-07-24 06:10:48,950 DEBUG [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/f 2023-07-24 06:10:48,950 DEBUG [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/f 2023-07-24 06:10:48,950 INFO [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 811edc04fbbb653e34e57c06c797b099 columnFamilyName f 2023-07-24 06:10:48,951 INFO [StoreOpener-811edc04fbbb653e34e57c06c797b099-1] regionserver.HStore(310): Store=811edc04fbbb653e34e57c06c797b099/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:48,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:48,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 811edc04fbbb653e34e57c06c797b099; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10758082880, jitterRate=0.0019245445728302002}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:48,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 811edc04fbbb653e34e57c06c797b099: 2023-07-24 06:10:48,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099., pid=34, masterSystemTime=1690179048873 2023-07-24 06:10:48,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 
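The desiredMaxFileSize values in the "Opened ..." lines are the table's maximum file size with the logged per-region jitter applied. Assuming the default hbase.hregion.max.filesize of 10737418240 bytes (10 GB), which this test does not appear to override: 10737418240 * (1 + 0.0019245445728302002) = 10758082880, which matches the value logged for region 811edc04fbbb653e34e57c06c797b099; the other regions' desiredMaxFileSize values line up with their own jitterRate in the same way.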
2023-07-24 06:10:48,966 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:48,966 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:48,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f167fc25d19ff520e165f8adb30ba159, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 06:10:48,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,967 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=811edc04fbbb653e34e57c06c797b099, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:48,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:48,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,967 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048967"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179048967"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179048967"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179048967"}]},"ts":"1690179048967"} 2023-07-24 06:10:48,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,968 INFO [StoreOpener-16947d848131931f060504e8df5f0962-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,971 DEBUG [StoreOpener-16947d848131931f060504e8df5f0962-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/f 2023-07-24 06:10:48,971 DEBUG [StoreOpener-16947d848131931f060504e8df5f0962-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/f 2023-07-24 06:10:48,973 INFO [StoreOpener-16947d848131931f060504e8df5f0962-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 
MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 16947d848131931f060504e8df5f0962 columnFamilyName f 2023-07-24 06:10:48,973 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=24 2023-07-24 06:10:48,973 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=24, state=SUCCESS; OpenRegionProcedure 811edc04fbbb653e34e57c06c797b099, server=jenkins-hbase4.apache.org,37173,1690179042942 in 251 msec 2023-07-24 06:10:48,974 INFO [StoreOpener-16947d848131931f060504e8df5f0962-1] regionserver.HStore(310): Store=16947d848131931f060504e8df5f0962/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:48,975 INFO [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,975 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, REOPEN/MOVE in 624 msec 2023-07-24 06:10:48,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,976 DEBUG [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/f 2023-07-24 06:10:48,976 DEBUG [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/f 2023-07-24 06:10:48,977 INFO [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
f167fc25d19ff520e165f8adb30ba159 columnFamilyName f 2023-07-24 06:10:48,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,978 INFO [StoreOpener-f167fc25d19ff520e165f8adb30ba159-1] regionserver.HStore(310): Store=f167fc25d19ff520e165f8adb30ba159/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:48,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 16947d848131931f060504e8df5f0962 2023-07-24 06:10:48,985 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 16947d848131931f060504e8df5f0962; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10332841120, jitterRate=-0.03767918050289154}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:48,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 16947d848131931f060504e8df5f0962: 2023-07-24 06:10:48,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962., pid=33, masterSystemTime=1690179048870 2023-07-24 06:10:48,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:48,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f167fc25d19ff520e165f8adb30ba159; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9763091360, jitterRate=-0.09074126183986664}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:48,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f167fc25d19ff520e165f8adb30ba159: 2023-07-24 06:10:48,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:48,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 
2023-07-24 06:10:48,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159., pid=36, masterSystemTime=1690179048873 2023-07-24 06:10:48,991 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=16947d848131931f060504e8df5f0962, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:48,991 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048990"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179048990"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179048990"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179048990"}]},"ts":"1690179048990"} 2023-07-24 06:10:48,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:48,994 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:48,995 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=f167fc25d19ff520e165f8adb30ba159, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:48,995 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179048995"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179048995"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179048995"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179048995"}]},"ts":"1690179048995"} 2023-07-24 06:10:49,002 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=27 2023-07-24 06:10:49,002 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=27, state=SUCCESS; OpenRegionProcedure 16947d848131931f060504e8df5f0962, server=jenkins-hbase4.apache.org,34793,1690179046626 in 275 msec 2023-07-24 06:10:49,007 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, REOPEN/MOVE in 643 msec 2023-07-24 06:10:49,015 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=25 2023-07-24 06:10:49,015 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=25, state=SUCCESS; OpenRegionProcedure f167fc25d19ff520e165f8adb30ba159, server=jenkins-hbase4.apache.org,37173,1690179042942 in 276 msec 2023-07-24 06:10:49,017 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, REOPEN/MOVE in 661 msec 
2023-07-24 06:10:49,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-24 06:10:49,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1909395056. 2023-07-24 06:10:49,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:49,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:49,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:49,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:49,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:10:49,385 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:49,393 INFO [Listener at localhost/46655] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:49,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:49,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:49,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 06:10:49,427 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179049426"}]},"ts":"1690179049426"} 2023-07-24 06:10:49,429 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 06:10:49,431 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 06:10:49,433 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, UNASSIGN}] 2023-07-24 06:10:49,437 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, UNASSIGN 2023-07-24 06:10:49,437 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, UNASSIGN 2023-07-24 06:10:49,438 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, UNASSIGN 2023-07-24 06:10:49,438 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, UNASSIGN 2023-07-24 06:10:49,439 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, UNASSIGN 2023-07-24 06:10:49,439 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=811edc04fbbb653e34e57c06c797b099, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:49,440 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179049439"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179049439"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179049439"}]},"ts":"1690179049439"} 2023-07-24 06:10:49,441 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=901dcf1ed239ff6c92413b41f5045f8a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:49,441 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=05c3edc2434b2bbeaeb332da7dc8e4c4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:49,441 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179049441"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179049441"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179049441"}]},"ts":"1690179049441"} 2023-07-24 06:10:49,441 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179049441"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179049441"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179049441"}]},"ts":"1690179049441"} 2023-07-24 06:10:49,441 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=16947d848131931f060504e8df5f0962, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:49,442 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179049441"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179049441"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179049441"}]},"ts":"1690179049441"} 2023-07-24 06:10:49,442 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=f167fc25d19ff520e165f8adb30ba159, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:49,442 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179049442"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179049442"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179049442"}]},"ts":"1690179049442"} 2023-07-24 06:10:49,453 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=40, state=RUNNABLE; CloseRegionProcedure 811edc04fbbb653e34e57c06c797b099, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:49,455 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=43, state=RUNNABLE; CloseRegionProcedure 901dcf1ed239ff6c92413b41f5045f8a, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:49,457 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=39, state=RUNNABLE; CloseRegionProcedure 05c3edc2434b2bbeaeb332da7dc8e4c4, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:49,460 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure 16947d848131931f060504e8df5f0962, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:49,462 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=41, state=RUNNABLE; CloseRegionProcedure f167fc25d19ff520e165f8adb30ba159, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:49,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 06:10:49,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:49,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f167fc25d19ff520e165f8adb30ba159, disabling compactions & flushes 2023-07-24 06:10:49,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
handler.UnassignRegionHandler(111): Close 16947d848131931f060504e8df5f0962 2023-07-24 06:10:49,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:49,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:49,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. after waiting 0 ms 2023-07-24 06:10:49,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:49,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 16947d848131931f060504e8df5f0962, disabling compactions & flushes 2023-07-24 06:10:49,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:49,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:49,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. after waiting 0 ms 2023-07-24 06:10:49,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 2023-07-24 06:10:49,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:10:49,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:10:49,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159. 2023-07-24 06:10:49,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962. 
2023-07-24 06:10:49,630 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f167fc25d19ff520e165f8adb30ba159: 2023-07-24 06:10:49,630 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 16947d848131931f060504e8df5f0962: 2023-07-24 06:10:49,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 16947d848131931f060504e8df5f0962 2023-07-24 06:10:49,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:49,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 901dcf1ed239ff6c92413b41f5045f8a, disabling compactions & flushes 2023-07-24 06:10:49,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:49,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:49,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. after waiting 0 ms 2023-07-24 06:10:49,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 2023-07-24 06:10:49,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:10:49,648 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=16947d848131931f060504e8df5f0962, regionState=CLOSED 2023-07-24 06:10:49,648 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179049648"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179049648"}]},"ts":"1690179049648"} 2023-07-24 06:10:49,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a. 
2023-07-24 06:10:49,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 901dcf1ed239ff6c92413b41f5045f8a: 2023-07-24 06:10:49,650 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=f167fc25d19ff520e165f8adb30ba159, regionState=CLOSED 2023-07-24 06:10:49,650 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179049650"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179049650"}]},"ts":"1690179049650"} 2023-07-24 06:10:49,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:49,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 05c3edc2434b2bbeaeb332da7dc8e4c4, disabling compactions & flushes 2023-07-24 06:10:49,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 2023-07-24 06:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. after waiting 0 ms 2023-07-24 06:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 
2023-07-24 06:10:49,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:49,677 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=901dcf1ed239ff6c92413b41f5045f8a, regionState=CLOSED 2023-07-24 06:10:49,677 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179049677"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179049677"}]},"ts":"1690179049677"} 2023-07-24 06:10:49,682 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-24 06:10:49,682 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure 16947d848131931f060504e8df5f0962, server=jenkins-hbase4.apache.org,34793,1690179046626 in 202 msec 2023-07-24 06:10:49,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:10:49,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=41 2023-07-24 06:10:49,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=41, state=SUCCESS; CloseRegionProcedure f167fc25d19ff520e165f8adb30ba159, server=jenkins-hbase4.apache.org,37173,1690179042942 in 216 msec 2023-07-24 06:10:49,685 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=16947d848131931f060504e8df5f0962, UNASSIGN in 249 msec 2023-07-24 06:10:49,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4. 
2023-07-24 06:10:49,685 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 05c3edc2434b2bbeaeb332da7dc8e4c4: 2023-07-24 06:10:49,686 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=43 2023-07-24 06:10:49,686 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=43, state=SUCCESS; CloseRegionProcedure 901dcf1ed239ff6c92413b41f5045f8a, server=jenkins-hbase4.apache.org,34793,1690179046626 in 226 msec 2023-07-24 06:10:49,688 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f167fc25d19ff520e165f8adb30ba159, UNASSIGN in 251 msec 2023-07-24 06:10:49,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:49,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:49,689 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=901dcf1ed239ff6c92413b41f5045f8a, UNASSIGN in 253 msec 2023-07-24 06:10:49,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 811edc04fbbb653e34e57c06c797b099, disabling compactions & flushes 2023-07-24 06:10:49,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:49,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:49,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. after waiting 0 ms 2023-07-24 06:10:49,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 
2023-07-24 06:10:49,692 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=05c3edc2434b2bbeaeb332da7dc8e4c4, regionState=CLOSED 2023-07-24 06:10:49,692 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179049692"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179049692"}]},"ts":"1690179049692"} 2023-07-24 06:10:49,701 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=39 2023-07-24 06:10:49,701 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=39, state=SUCCESS; CloseRegionProcedure 05c3edc2434b2bbeaeb332da7dc8e4c4, server=jenkins-hbase4.apache.org,37173,1690179042942 in 238 msec 2023-07-24 06:10:49,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:10:49,708 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05c3edc2434b2bbeaeb332da7dc8e4c4, UNASSIGN in 268 msec 2023-07-24 06:10:49,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099. 2023-07-24 06:10:49,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 811edc04fbbb653e34e57c06c797b099: 2023-07-24 06:10:49,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:49,711 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=811edc04fbbb653e34e57c06c797b099, regionState=CLOSED 2023-07-24 06:10:49,711 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179049711"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179049711"}]},"ts":"1690179049711"} 2023-07-24 06:10:49,716 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=40 2023-07-24 06:10:49,716 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=40, state=SUCCESS; CloseRegionProcedure 811edc04fbbb653e34e57c06c797b099, server=jenkins-hbase4.apache.org,37173,1690179042942 in 261 msec 2023-07-24 06:10:49,719 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=38 2023-07-24 06:10:49,719 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=811edc04fbbb653e34e57c06c797b099, UNASSIGN in 283 msec 2023-07-24 06:10:49,720 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179049720"}]},"ts":"1690179049720"} 2023-07-24 06:10:49,722 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 06:10:49,724 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 06:10:49,728 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 324 msec 2023-07-24 06:10:49,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 06:10:49,729 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-24 06:10:49,730 INFO [Listener at localhost/46655] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:49,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:49,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-24 06:10:49,752 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-24 06:10:49,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 06:10:49,772 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:49,772 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:49,772 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 2023-07-24 06:10:49,772 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:49,772 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:49,779 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/f, FileablePath, 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/recovered.edits] 2023-07-24 06:10:49,786 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/recovered.edits] 2023-07-24 06:10:49,786 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/recovered.edits] 2023-07-24 06:10:49,787 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/recovered.edits] 2023-07-24 06:10:49,787 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/recovered.edits] 2023-07-24 06:10:49,810 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/recovered.edits/7.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159/recovered.edits/7.seqid 2023-07-24 06:10:49,813 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f167fc25d19ff520e165f8adb30ba159 2023-07-24 06:10:49,816 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/recovered.edits/7.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099/recovered.edits/7.seqid 2023-07-24 06:10:49,817 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/811edc04fbbb653e34e57c06c797b099 2023-07-24 06:10:49,818 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/recovered.edits/7.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962/recovered.edits/7.seqid 2023-07-24 06:10:49,818 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/recovered.edits/7.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a/recovered.edits/7.seqid 2023-07-24 06:10:49,819 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/16947d848131931f060504e8df5f0962 2023-07-24 06:10:49,821 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/recovered.edits/7.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4/recovered.edits/7.seqid 2023-07-24 06:10:49,821 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/901dcf1ed239ff6c92413b41f5045f8a 2023-07-24 06:10:49,823 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05c3edc2434b2bbeaeb332da7dc8e4c4 2023-07-24 06:10:49,823 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 06:10:49,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 06:10:49,865 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 06:10:49,872 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 06:10:49,875 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-24 06:10:49,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179049875"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:49,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179049875"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:49,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179049875"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:49,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179047110.16947d848131931f060504e8df5f0962.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179049875"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:49,875 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179049875"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:49,885 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 06:10:49,886 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 05c3edc2434b2bbeaeb332da7dc8e4c4, NAME => 'Group_testTableMoveTruncateAndDrop,,1690179047110.05c3edc2434b2bbeaeb332da7dc8e4c4.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 811edc04fbbb653e34e57c06c797b099, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690179047110.811edc04fbbb653e34e57c06c797b099.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f167fc25d19ff520e165f8adb30ba159, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179047110.f167fc25d19ff520e165f8adb30ba159.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 16947d848131931f060504e8df5f0962, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179047110.16947d848131931f060504e8df5f0962.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 901dcf1ed239ff6c92413b41f5045f8a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690179047110.901dcf1ed239ff6c92413b41f5045f8a.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 06:10:49,886 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-24 06:10:49,886 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690179049886"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:49,889 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 06:10:49,900 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:49,900 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000 2023-07-24 06:10:49,900 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:49,900 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:49,900 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:49,901 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8 empty. 2023-07-24 06:10:49,901 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000 empty. 2023-07-24 06:10:49,901 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb empty. 2023-07-24 06:10:49,901 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1 empty. 2023-07-24 06:10:49,902 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:49,902 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c empty. 
2023-07-24 06:10:49,902 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:49,902 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:49,902 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000 2023-07-24 06:10:49,903 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:49,903 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 06:10:49,942 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:49,950 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => a090cc34aed3da6a81e8e13f552fca5c, NAME => 'Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:49,950 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1b272067292f4e4c06ae34ffe20be5a8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:49,951 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 2760c00ff18916a86d55ad84db67edbb, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', 
VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:50,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 06:10:50,106 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,106 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 1b272067292f4e4c06ae34ffe20be5a8, disabling compactions & flushes 2023-07-24 06:10:50,106 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:50,106 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:50,107 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. after waiting 0 ms 2023-07-24 06:10:50,107 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:50,107 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 
2023-07-24 06:10:50,107 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 1b272067292f4e4c06ae34ffe20be5a8: 2023-07-24 06:10:50,107 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2919f6f5516fb40a66dc60bccfa6ead1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:50,109 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,109 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 2760c00ff18916a86d55ad84db67edbb, disabling compactions & flushes 2023-07-24 06:10:50,109 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 2023-07-24 06:10:50,109 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 2023-07-24 06:10:50,109 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. after waiting 0 ms 2023-07-24 06:10:50,109 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 2023-07-24 06:10:50,109 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 
2023-07-24 06:10:50,109 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 2760c00ff18916a86d55ad84db67edbb: 2023-07-24 06:10:50,110 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => affbcac39e46e9e55089a77148675000, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:50,187 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,187 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2919f6f5516fb40a66dc60bccfa6ead1, disabling compactions & flushes 2023-07-24 06:10:50,187 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 2023-07-24 06:10:50,188 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 2023-07-24 06:10:50,188 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. after waiting 0 ms 2023-07-24 06:10:50,188 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 2023-07-24 06:10:50,188 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 
2023-07-24 06:10:50,188 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2919f6f5516fb40a66dc60bccfa6ead1: 2023-07-24 06:10:50,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 06:10:50,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing a090cc34aed3da6a81e8e13f552fca5c, disabling compactions & flushes 2023-07-24 06:10:50,508 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:50,509 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:50,509 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. after waiting 0 ms 2023-07-24 06:10:50,509 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:50,509 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:50,509 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for a090cc34aed3da6a81e8e13f552fca5c: 2023-07-24 06:10:50,553 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,554 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing affbcac39e46e9e55089a77148675000, disabling compactions & flushes 2023-07-24 06:10:50,554 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 2023-07-24 06:10:50,554 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 2023-07-24 06:10:50,554 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 
after waiting 0 ms 2023-07-24 06:10:50,554 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 2023-07-24 06:10:50,554 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 2023-07-24 06:10:50,554 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for affbcac39e46e9e55089a77148675000: 2023-07-24 06:10:50,563 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179050563"}]},"ts":"1690179050563"} 2023-07-24 06:10:50,563 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179050563"}]},"ts":"1690179050563"} 2023-07-24 06:10:50,564 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179050563"}]},"ts":"1690179050563"} 2023-07-24 06:10:50,564 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179050563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179050563"}]},"ts":"1690179050563"} 2023-07-24 06:10:50,564 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179050563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179050563"}]},"ts":"1690179050563"} 2023-07-24 06:10:50,573 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-24 06:10:50,574 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179050574"}]},"ts":"1690179050574"} 2023-07-24 06:10:50,581 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 06:10:50,586 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:50,587 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:50,587 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:50,587 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:50,591 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a090cc34aed3da6a81e8e13f552fca5c, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b272067292f4e4c06ae34ffe20be5a8, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2760c00ff18916a86d55ad84db67edbb, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2919f6f5516fb40a66dc60bccfa6ead1, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=affbcac39e46e9e55089a77148675000, ASSIGN}] 2023-07-24 06:10:50,595 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b272067292f4e4c06ae34ffe20be5a8, ASSIGN 2023-07-24 06:10:50,595 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a090cc34aed3da6a81e8e13f552fca5c, ASSIGN 2023-07-24 06:10:50,597 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b272067292f4e4c06ae34ffe20be5a8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34793,1690179046626; forceNewPlan=false, retain=false 2023-07-24 06:10:50,598 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a090cc34aed3da6a81e8e13f552fca5c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37173,1690179042942; forceNewPlan=false, retain=false 2023-07-24 06:10:50,599 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=affbcac39e46e9e55089a77148675000, ASSIGN 2023-07-24 06:10:50,599 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2919f6f5516fb40a66dc60bccfa6ead1, ASSIGN 2023-07-24 06:10:50,599 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2760c00ff18916a86d55ad84db67edbb, ASSIGN 2023-07-24 06:10:50,600 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=affbcac39e46e9e55089a77148675000, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34793,1690179046626; forceNewPlan=false, retain=false 2023-07-24 06:10:50,601 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2919f6f5516fb40a66dc60bccfa6ead1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34793,1690179046626; forceNewPlan=false, retain=false 2023-07-24 06:10:50,604 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2760c00ff18916a86d55ad84db67edbb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37173,1690179042942; forceNewPlan=false, retain=false 2023-07-24 06:10:50,747 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 06:10:50,751 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=2760c00ff18916a86d55ad84db67edbb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:50,751 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=2919f6f5516fb40a66dc60bccfa6ead1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:50,751 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=1b272067292f4e4c06ae34ffe20be5a8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:50,752 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050751"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179050751"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179050751"}]},"ts":"1690179050751"} 2023-07-24 06:10:50,751 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=a090cc34aed3da6a81e8e13f552fca5c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:50,751 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=affbcac39e46e9e55089a77148675000, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:50,752 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050751"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179050751"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179050751"}]},"ts":"1690179050751"} 2023-07-24 06:10:50,752 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050751"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179050751"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179050751"}]},"ts":"1690179050751"} 2023-07-24 06:10:50,752 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179050751"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179050751"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179050751"}]},"ts":"1690179050751"} 2023-07-24 06:10:50,752 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179050751"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179050751"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179050751"}]},"ts":"1690179050751"} 2023-07-24 06:10:50,756 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=53, state=RUNNABLE; OpenRegionProcedure 
2919f6f5516fb40a66dc60bccfa6ead1, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:50,759 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=54, state=RUNNABLE; OpenRegionProcedure affbcac39e46e9e55089a77148675000, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:50,761 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=52, state=RUNNABLE; OpenRegionProcedure 2760c00ff18916a86d55ad84db67edbb, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:50,765 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=51, state=RUNNABLE; OpenRegionProcedure 1b272067292f4e4c06ae34ffe20be5a8, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:50,767 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=50, state=RUNNABLE; OpenRegionProcedure a090cc34aed3da6a81e8e13f552fca5c, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:50,775 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 06:10:50,860 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 06:10:50,861 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 06:10:50,861 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:10:50,861 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 06:10:50,861 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 06:10:50,861 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 06:10:50,865 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 06:10:50,866 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 06:10:50,867 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 06:10:50,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 06:10:50,918 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 
2023-07-24 06:10:50,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2919f6f5516fb40a66dc60bccfa6ead1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 06:10:50,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:50,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:50,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:50,924 INFO [StoreOpener-2919f6f5516fb40a66dc60bccfa6ead1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:50,927 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 
2023-07-24 06:10:50,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2760c00ff18916a86d55ad84db67edbb, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 06:10:50,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:50,928 DEBUG [StoreOpener-2919f6f5516fb40a66dc60bccfa6ead1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1/f 2023-07-24 06:10:50,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,928 DEBUG [StoreOpener-2919f6f5516fb40a66dc60bccfa6ead1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1/f 2023-07-24 06:10:50,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:50,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:50,929 INFO [StoreOpener-2919f6f5516fb40a66dc60bccfa6ead1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2919f6f5516fb40a66dc60bccfa6ead1 columnFamilyName f 2023-07-24 06:10:50,930 INFO [StoreOpener-2919f6f5516fb40a66dc60bccfa6ead1-1] regionserver.HStore(310): Store=2919f6f5516fb40a66dc60bccfa6ead1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:50,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:50,932 INFO [StoreOpener-2760c00ff18916a86d55ad84db67edbb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of 
region 2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:50,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:50,935 DEBUG [StoreOpener-2760c00ff18916a86d55ad84db67edbb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb/f 2023-07-24 06:10:50,935 DEBUG [StoreOpener-2760c00ff18916a86d55ad84db67edbb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb/f 2023-07-24 06:10:50,936 INFO [StoreOpener-2760c00ff18916a86d55ad84db67edbb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2760c00ff18916a86d55ad84db67edbb columnFamilyName f 2023-07-24 06:10:50,937 INFO [StoreOpener-2760c00ff18916a86d55ad84db67edbb-1] regionserver.HStore(310): Store=2760c00ff18916a86d55ad84db67edbb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:50,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:50,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:50,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:50,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:50,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2919f6f5516fb40a66dc60bccfa6ead1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10898974240, jitterRate=0.015046074986457825}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:50,946 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2919f6f5516fb40a66dc60bccfa6ead1: 2023-07-24 06:10:50,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1., pid=55, masterSystemTime=1690179050911 2023-07-24 06:10:50,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:50,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 2023-07-24 06:10:50,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 2023-07-24 06:10:50,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:50,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b272067292f4e4c06ae34ffe20be5a8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 06:10:50,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:50,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:50,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:50,956 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=2919f6f5516fb40a66dc60bccfa6ead1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:50,956 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050956"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179050956"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179050956"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179050956"}]},"ts":"1690179050956"} 2023-07-24 06:10:50,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb/recovered.edits/1.seqid, 
newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:50,961 INFO [StoreOpener-1b272067292f4e4c06ae34ffe20be5a8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:50,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2760c00ff18916a86d55ad84db67edbb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10732431840, jitterRate=-4.643946886062622E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:50,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2760c00ff18916a86d55ad84db67edbb: 2023-07-24 06:10:50,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=53 2023-07-24 06:10:50,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=53, state=SUCCESS; OpenRegionProcedure 2919f6f5516fb40a66dc60bccfa6ead1, server=jenkins-hbase4.apache.org,34793,1690179046626 in 203 msec 2023-07-24 06:10:50,968 DEBUG [StoreOpener-1b272067292f4e4c06ae34ffe20be5a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8/f 2023-07-24 06:10:50,968 DEBUG [StoreOpener-1b272067292f4e4c06ae34ffe20be5a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8/f 2023-07-24 06:10:50,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb., pid=57, masterSystemTime=1690179050918 2023-07-24 06:10:50,969 INFO [StoreOpener-1b272067292f4e4c06ae34ffe20be5a8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b272067292f4e4c06ae34ffe20be5a8 columnFamilyName f 2023-07-24 06:10:50,970 INFO [StoreOpener-1b272067292f4e4c06ae34ffe20be5a8-1] regionserver.HStore(310): Store=1b272067292f4e4c06ae34ffe20be5a8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:50,970 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2919f6f5516fb40a66dc60bccfa6ead1, ASSIGN in 375 msec 2023-07-24 06:10:50,971 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 2023-07-24 06:10:50,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 2023-07-24 06:10:50,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:50,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a090cc34aed3da6a81e8e13f552fca5c, NAME => 'Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 06:10:50,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:50,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:50,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:50,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:50,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:50,973 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=2760c00ff18916a86d55ad84db67edbb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:50,973 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050973"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179050973"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179050973"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179050973"}]},"ts":"1690179050973"} 2023-07-24 06:10:50,974 INFO [StoreOpener-a090cc34aed3da6a81e8e13f552fca5c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column 
family f of region a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:50,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:50,976 DEBUG [StoreOpener-a090cc34aed3da6a81e8e13f552fca5c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c/f 2023-07-24 06:10:50,977 DEBUG [StoreOpener-a090cc34aed3da6a81e8e13f552fca5c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c/f 2023-07-24 06:10:50,977 INFO [StoreOpener-a090cc34aed3da6a81e8e13f552fca5c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a090cc34aed3da6a81e8e13f552fca5c columnFamilyName f 2023-07-24 06:10:50,978 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-24 06:10:50,978 INFO [StoreOpener-a090cc34aed3da6a81e8e13f552fca5c-1] regionserver.HStore(310): Store=a090cc34aed3da6a81e8e13f552fca5c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:50,978 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; OpenRegionProcedure 2760c00ff18916a86d55ad84db67edbb, server=jenkins-hbase4.apache.org,37173,1690179042942 in 215 msec 2023-07-24 06:10:50,980 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2760c00ff18916a86d55ad84db67edbb, ASSIGN in 388 msec 2023-07-24 06:10:50,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:50,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:50,981 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1b272067292f4e4c06ae34ffe20be5a8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10937205440, jitterRate=0.018606632947921753}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 
2023-07-24 06:10:50,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1b272067292f4e4c06ae34ffe20be5a8: 2023-07-24 06:10:50,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:50,982 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8., pid=58, masterSystemTime=1690179050911 2023-07-24 06:10:50,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:50,985 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:50,985 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 2023-07-24 06:10:50,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => affbcac39e46e9e55089a77148675000, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 06:10:50,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop affbcac39e46e9e55089a77148675000 2023-07-24 06:10:50,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:50,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for affbcac39e46e9e55089a77148675000 2023-07-24 06:10:50,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for affbcac39e46e9e55089a77148675000 2023-07-24 06:10:50,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:50,987 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=1b272067292f4e4c06ae34ffe20be5a8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:50,987 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179050987"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179050987"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179050987"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179050987"}]},"ts":"1690179050987"} 2023-07-24 06:10:50,993 
INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=51 2023-07-24 06:10:50,993 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=51, state=SUCCESS; OpenRegionProcedure 1b272067292f4e4c06ae34ffe20be5a8, server=jenkins-hbase4.apache.org,34793,1690179046626 in 225 msec 2023-07-24 06:10:50,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:50,995 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b272067292f4e4c06ae34ffe20be5a8, ASSIGN in 406 msec 2023-07-24 06:10:50,996 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a090cc34aed3da6a81e8e13f552fca5c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10687235680, jitterRate=-0.004673615097999573}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:50,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a090cc34aed3da6a81e8e13f552fca5c: 2023-07-24 06:10:50,997 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c., pid=59, masterSystemTime=1690179050918 2023-07-24 06:10:51,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:51,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 
2023-07-24 06:10:51,000 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=a090cc34aed3da6a81e8e13f552fca5c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:51,001 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179051000"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179051000"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179051000"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179051000"}]},"ts":"1690179051000"} 2023-07-24 06:10:51,000 INFO [StoreOpener-affbcac39e46e9e55089a77148675000-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region affbcac39e46e9e55089a77148675000 2023-07-24 06:10:51,007 DEBUG [StoreOpener-affbcac39e46e9e55089a77148675000-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000/f 2023-07-24 06:10:51,007 DEBUG [StoreOpener-affbcac39e46e9e55089a77148675000-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000/f 2023-07-24 06:10:51,007 INFO [StoreOpener-affbcac39e46e9e55089a77148675000-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region affbcac39e46e9e55089a77148675000 columnFamilyName f 2023-07-24 06:10:51,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=50 2023-07-24 06:10:51,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=50, state=SUCCESS; OpenRegionProcedure a090cc34aed3da6a81e8e13f552fca5c, server=jenkins-hbase4.apache.org,37173,1690179042942 in 236 msec 2023-07-24 06:10:51,008 INFO [StoreOpener-affbcac39e46e9e55089a77148675000-1] regionserver.HStore(310): Store=affbcac39e46e9e55089a77148675000/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:51,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000 2023-07-24 06:10:51,010 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a090cc34aed3da6a81e8e13f552fca5c, ASSIGN in 421 msec 2023-07-24 06:10:51,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000 2023-07-24 06:10:51,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for affbcac39e46e9e55089a77148675000 2023-07-24 06:10:51,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:51,018 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened affbcac39e46e9e55089a77148675000; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11609315520, jitterRate=0.08120176196098328}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:51,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for affbcac39e46e9e55089a77148675000: 2023-07-24 06:10:51,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000., pid=56, masterSystemTime=1690179050911 2023-07-24 06:10:51,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 2023-07-24 06:10:51,021 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 
2023-07-24 06:10:51,021 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=affbcac39e46e9e55089a77148675000, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:51,021 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179051021"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179051021"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179051021"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179051021"}]},"ts":"1690179051021"} 2023-07-24 06:10:51,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=54 2023-07-24 06:10:51,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=54, state=SUCCESS; OpenRegionProcedure affbcac39e46e9e55089a77148675000, server=jenkins-hbase4.apache.org,34793,1690179046626 in 264 msec 2023-07-24 06:10:51,029 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-24 06:10:51,029 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=affbcac39e46e9e55089a77148675000, ASSIGN in 436 msec 2023-07-24 06:10:51,029 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179051029"}]},"ts":"1690179051029"} 2023-07-24 06:10:51,031 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 06:10:51,033 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-24 06:10:51,036 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.2920 sec 2023-07-24 06:10:51,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 06:10:51,895 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-24 06:10:51,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:51,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:51,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:51,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:51,899 INFO [Listener at localhost/46655] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:51,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:51,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:51,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-24 06:10:51,906 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179051906"}]},"ts":"1690179051906"} 2023-07-24 06:10:51,908 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 06:10:51,910 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 06:10:51,911 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a090cc34aed3da6a81e8e13f552fca5c, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b272067292f4e4c06ae34ffe20be5a8, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2760c00ff18916a86d55ad84db67edbb, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2919f6f5516fb40a66dc60bccfa6ead1, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=affbcac39e46e9e55089a77148675000, UNASSIGN}] 2023-07-24 06:10:51,913 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2760c00ff18916a86d55ad84db67edbb, UNASSIGN 2023-07-24 06:10:51,913 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=affbcac39e46e9e55089a77148675000, UNASSIGN 2023-07-24 06:10:51,913 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2919f6f5516fb40a66dc60bccfa6ead1, UNASSIGN 2023-07-24 06:10:51,913 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b272067292f4e4c06ae34ffe20be5a8, UNASSIGN 
2023-07-24 06:10:51,913 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a090cc34aed3da6a81e8e13f552fca5c, UNASSIGN 2023-07-24 06:10:51,914 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=affbcac39e46e9e55089a77148675000, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:51,914 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=1b272067292f4e4c06ae34ffe20be5a8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:51,914 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179051914"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179051914"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179051914"}]},"ts":"1690179051914"} 2023-07-24 06:10:51,914 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=2760c00ff18916a86d55ad84db67edbb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:51,915 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179051914"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179051914"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179051914"}]},"ts":"1690179051914"} 2023-07-24 06:10:51,915 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179051914"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179051914"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179051914"}]},"ts":"1690179051914"} 2023-07-24 06:10:51,914 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=a090cc34aed3da6a81e8e13f552fca5c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:51,915 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179051914"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179051914"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179051914"}]},"ts":"1690179051914"} 2023-07-24 06:10:51,914 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=2919f6f5516fb40a66dc60bccfa6ead1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:51,915 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179051914"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179051914"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179051914"}]},"ts":"1690179051914"} 2023-07-24 06:10:51,916 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=65, state=RUNNABLE; CloseRegionProcedure affbcac39e46e9e55089a77148675000, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:51,917 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=63, state=RUNNABLE; CloseRegionProcedure 2760c00ff18916a86d55ad84db67edbb, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:51,919 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=62, state=RUNNABLE; CloseRegionProcedure 1b272067292f4e4c06ae34ffe20be5a8, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:51,920 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=61, state=RUNNABLE; CloseRegionProcedure a090cc34aed3da6a81e8e13f552fca5c, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:51,921 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=64, state=RUNNABLE; CloseRegionProcedure 2919f6f5516fb40a66dc60bccfa6ead1, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:52,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-24 06:10:52,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:52,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2919f6f5516fb40a66dc60bccfa6ead1, disabling compactions & flushes 2023-07-24 06:10:52,075 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 2023-07-24 06:10:52,075 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 2023-07-24 06:10:52,075 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. after waiting 0 ms 2023-07-24 06:10:52,075 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 
2023-07-24 06:10:52,075 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:52,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a090cc34aed3da6a81e8e13f552fca5c, disabling compactions & flushes 2023-07-24 06:10:52,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:52,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:52,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. after waiting 0 ms 2023-07-24 06:10:52,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:52,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:52,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:52,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1. 2023-07-24 06:10:52,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c. 2023-07-24 06:10:52,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2919f6f5516fb40a66dc60bccfa6ead1: 2023-07-24 06:10:52,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a090cc34aed3da6a81e8e13f552fca5c: 2023-07-24 06:10:52,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:52,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:52,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1b272067292f4e4c06ae34ffe20be5a8, disabling compactions & flushes 2023-07-24 06:10:52,086 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=2919f6f5516fb40a66dc60bccfa6ead1, regionState=CLOSED 2023-07-24 06:10:52,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 
2023-07-24 06:10:52,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:52,086 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179052085"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179052085"}]},"ts":"1690179052085"} 2023-07-24 06:10:52,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. after waiting 0 ms 2023-07-24 06:10:52,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:52,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:52,087 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:52,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2760c00ff18916a86d55ad84db67edbb, disabling compactions & flushes 2023-07-24 06:10:52,087 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 2023-07-24 06:10:52,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 2023-07-24 06:10:52,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. after waiting 0 ms 2023-07-24 06:10:52,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 
2023-07-24 06:10:52,088 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=a090cc34aed3da6a81e8e13f552fca5c, regionState=CLOSED 2023-07-24 06:10:52,088 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179052088"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179052088"}]},"ts":"1690179052088"} 2023-07-24 06:10:52,093 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=64 2023-07-24 06:10:52,093 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=64, state=SUCCESS; CloseRegionProcedure 2919f6f5516fb40a66dc60bccfa6ead1, server=jenkins-hbase4.apache.org,34793,1690179046626 in 168 msec 2023-07-24 06:10:52,094 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=61 2023-07-24 06:10:52,094 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=61, state=SUCCESS; CloseRegionProcedure a090cc34aed3da6a81e8e13f552fca5c, server=jenkins-hbase4.apache.org,37173,1690179042942 in 170 msec 2023-07-24 06:10:52,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a090cc34aed3da6a81e8e13f552fca5c, UNASSIGN in 183 msec 2023-07-24 06:10:52,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:52,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:52,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8. 2023-07-24 06:10:52,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1b272067292f4e4c06ae34ffe20be5a8: 2023-07-24 06:10:52,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb. 
2023-07-24 06:10:52,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2760c00ff18916a86d55ad84db67edbb: 2023-07-24 06:10:52,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2919f6f5516fb40a66dc60bccfa6ead1, UNASSIGN in 182 msec 2023-07-24 06:10:52,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:52,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close affbcac39e46e9e55089a77148675000 2023-07-24 06:10:52,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing affbcac39e46e9e55089a77148675000, disabling compactions & flushes 2023-07-24 06:10:52,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 2023-07-24 06:10:52,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 2023-07-24 06:10:52,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. after waiting 0 ms 2023-07-24 06:10:52,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 
2023-07-24 06:10:52,102 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=1b272067292f4e4c06ae34ffe20be5a8, regionState=CLOSED 2023-07-24 06:10:52,102 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179052102"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179052102"}]},"ts":"1690179052102"} 2023-07-24 06:10:52,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:52,103 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=2760c00ff18916a86d55ad84db67edbb, regionState=CLOSED 2023-07-24 06:10:52,103 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690179052103"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179052103"}]},"ts":"1690179052103"} 2023-07-24 06:10:52,108 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=62 2023-07-24 06:10:52,108 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=63 2023-07-24 06:10:52,108 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=62, state=SUCCESS; CloseRegionProcedure 1b272067292f4e4c06ae34ffe20be5a8, server=jenkins-hbase4.apache.org,34793,1690179046626 in 185 msec 2023-07-24 06:10:52,108 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; CloseRegionProcedure 2760c00ff18916a86d55ad84db67edbb, server=jenkins-hbase4.apache.org,37173,1690179042942 in 188 msec 2023-07-24 06:10:52,110 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2760c00ff18916a86d55ad84db67edbb, UNASSIGN in 197 msec 2023-07-24 06:10:52,110 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1b272067292f4e4c06ae34ffe20be5a8, UNASSIGN in 197 msec 2023-07-24 06:10:52,111 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:52,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000. 
2023-07-24 06:10:52,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for affbcac39e46e9e55089a77148675000: 2023-07-24 06:10:52,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed affbcac39e46e9e55089a77148675000 2023-07-24 06:10:52,114 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=affbcac39e46e9e55089a77148675000, regionState=CLOSED 2023-07-24 06:10:52,114 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690179052114"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179052114"}]},"ts":"1690179052114"} 2023-07-24 06:10:52,118 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=65 2023-07-24 06:10:52,118 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=65, state=SUCCESS; CloseRegionProcedure affbcac39e46e9e55089a77148675000, server=jenkins-hbase4.apache.org,34793,1690179046626 in 200 msec 2023-07-24 06:10:52,120 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=60 2023-07-24 06:10:52,120 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=affbcac39e46e9e55089a77148675000, UNASSIGN in 207 msec 2023-07-24 06:10:52,121 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179052121"}]},"ts":"1690179052121"} 2023-07-24 06:10:52,123 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 06:10:52,125 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 06:10:52,128 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 226 msec 2023-07-24 06:10:52,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-24 06:10:52,210 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-24 06:10:52,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:52,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:52,227 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:52,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from 
rsgroup 'Group_testTableMoveTruncateAndDrop_1909395056' 2023-07-24 06:10:52,229 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:52,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:52,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:52,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 06:10:52,245 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:52,245 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:52,245 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:52,245 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:52,245 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000 2023-07-24 06:10:52,249 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c/recovered.edits] 2023-07-24 06:10:52,249 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1/recovered.edits] 2023-07-24 06:10:52,249 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb/recovered.edits] 2023-07-24 06:10:52,249 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000/recovered.edits] 2023-07-24 06:10:52,250 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8/recovered.edits] 2023-07-24 06:10:52,259 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c/recovered.edits/4.seqid 2023-07-24 06:10:52,260 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1/recovered.edits/4.seqid 2023-07-24 06:10:52,261 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000/recovered.edits/4.seqid 2023-07-24 06:10:52,261 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a090cc34aed3da6a81e8e13f552fca5c 2023-07-24 06:10:52,262 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2919f6f5516fb40a66dc60bccfa6ead1 2023-07-24 06:10:52,262 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8/recovered.edits/4.seqid 2023-07-24 06:10:52,263 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb/recovered.edits/4.seqid 2023-07-24 06:10:52,263 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/affbcac39e46e9e55089a77148675000 2023-07-24 06:10:52,263 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1b272067292f4e4c06ae34ffe20be5a8 2023-07-24 06:10:52,263 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2760c00ff18916a86d55ad84db67edbb 2023-07-24 06:10:52,264 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 06:10:52,271 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:52,278 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 06:10:52,281 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 06:10:52,282 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:52,282 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-24 06:10:52,283 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179052282"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:52,283 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179052282"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:52,283 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179052282"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:52,283 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179052282"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:52,283 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179052282"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:52,285 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 06:10:52,285 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a090cc34aed3da6a81e8e13f552fca5c, NAME => 'Group_testTableMoveTruncateAndDrop,,1690179049825.a090cc34aed3da6a81e8e13f552fca5c.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 1b272067292f4e4c06ae34ffe20be5a8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690179049825.1b272067292f4e4c06ae34ffe20be5a8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 2760c00ff18916a86d55ad84db67edbb, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690179049825.2760c00ff18916a86d55ad84db67edbb.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 2919f6f5516fb40a66dc60bccfa6ead1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690179049825.2919f6f5516fb40a66dc60bccfa6ead1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => affbcac39e46e9e55089a77148675000, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690179049825.affbcac39e46e9e55089a77148675000.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 06:10:52,285 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-24 06:10:52,285 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690179052285"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:52,287 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 06:10:52,290 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 06:10:52,291 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 73 msec 2023-07-24 06:10:52,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 06:10:52,346 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-24 06:10:52,347 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:52,347 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:52,353 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,353 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,354 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:10:52,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:10:52,355 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:52,356 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:34793] to rsgroup default 2023-07-24 06:10:52,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:52,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:52,362 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1909395056, current retry=0 2023-07-24 06:10:52,362 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942] are moved back to Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:52,362 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1909395056 => default 2023-07-24 06:10:52,362 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:52,369 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1909395056 2023-07-24 06:10:52,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 06:10:52,382 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:52,383 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:10:52,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:10:52,383 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:52,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:10:52,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:52,386 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:10:52,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:10:52,392 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:52,398 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:10:52,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:10:52,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:10:52,409 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:52,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,416 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:10:52,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:52,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180252416, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:10:52,417 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:10:52,419 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:52,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,420 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:10:52,421 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:52,421 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:52,451 INFO [Listener at localhost/46655] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=491 (was 424) Potentially hanging thread: hconnection-0x369a3209-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1601604286_17 at /127.0.0.1:48136 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:41501 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1601604286_17 at /127.0.0.1:40670 [Receiving block BP-1478983737-172.31.14.131-1690179036409:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3cbddc65-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1601604286_17 at /127.0.0.1:39228 [Receiving block BP-1478983737-172.31.14.131-1690179036409:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:34793Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp946031351-636 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34793-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1601604286_17 at /127.0.0.1:40132 [Receiving block BP-1478983737-172.31.14.131-1690179036409:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1478983737-172.31.14.131-1690179036409:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-637-acceptor-0@6e9dbf27-ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:36883} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34793 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54990@0x2dd2b676-SendThread(127.0.0.1:54990) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54990@0x2dd2b676-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1478983737-172.31.14.131-1690179036409:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:41501 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x369a3209-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50-prefix:jenkins-hbase4.apache.org,34793,1690179046626 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54990@0x2dd2b676 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1478983737-172.31.14.131-1690179036409:blk_1073741840_1016, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp946031351-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1017541935_17 at /127.0.0.1:56830 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34793 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=776 (was 677) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=377 (was 375) - SystemLoadAverage LEAK? -, ProcessCount=177 (was 177), AvailableMemoryMB=6635 (was 7093) 2023-07-24 06:10:52,468 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=491, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=377, ProcessCount=177, AvailableMemoryMB=6634 2023-07-24 06:10:52,468 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-24 06:10:52,474 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,475 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,476 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:10:52,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:10:52,476 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:52,478 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:10:52,478 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:52,479 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:10:52,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:10:52,485 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:52,489 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:10:52,490 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:10:52,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:10:52,498 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:52,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,505 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:10:52,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:52,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180252505, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:10:52,506 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:10:52,508 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:52,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,510 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:10:52,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:52,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:52,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-24 06:10:52,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:52,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:53912 deadline: 1690180252513, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 06:10:52,514 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-24 06:10:52,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:52,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:53912 deadline: 1690180252514, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 06:10:52,516 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-24 06:10:52,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:52,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:53912 deadline: 1690180252516, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 06:10:52,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-24 06:10:52,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-24 06:10:52,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:52,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:52,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:10:52,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
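Editor note: in the entries above, "foo*", "foo@" and "-" are rejected by RSGroupInfoManagerImpl.checkGroupName while "foo_123" is accepted, so underscores are evidently allowed even though the message says "alphanumeric". The following is an approximation of that check, written only to illustrate the behavior observed in this log; the actual implementation may differ.

```java
import org.apache.hadoop.hbase.constraint.ConstraintException;

public class GroupNameCheckSketch {
  // Approximation of the validation seen in this log: "foo*", "foo@" and "-"
  // are rejected, while "foo_123" passes, so word characters (letters,
  // digits, underscore) appear to be the accepted set.
  static void checkGroupName(String name) throws ConstraintException {
    if (name == null || name.isEmpty() || !name.matches("[a-zA-Z0-9_]+")) {
      throw new ConstraintException(
          "RSGroup name should only contain alphanumeric characters");
    }
  }

  public static void main(String[] args) throws ConstraintException {
    checkGroupName("foo_123"); // accepted, as in the log
    checkGroupName("foo*");    // throws ConstraintException, as in the log
  }
}
```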
2023-07-24 06:10:52,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:52,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:10:52,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:52,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-24 06:10:52,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 06:10:52,547 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:52,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:10:52,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
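Editor note: every add/remove above is followed by "Updating znode: /hbase/rsgroup/<group>" and "Writing ZK GroupInfo count: N", i.e. the group metadata is persisted as one child znode per rsgroup under /hbase/rsgroup. A small sketch of inspecting those znodes with the plain ZooKeeper client; the quorum address is an assumption, since the mini cluster in this log uses a random client port.

```java
import java.util.List;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ListRsGroupZnodes {
  public static void main(String[] args) throws Exception {
    // "localhost:2181" is illustrative only.
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // no-op watcher; we only read once
      }
    });
    try {
      // One child znode per rsgroup, which is why the log alternates between
      // "Updating znode: /hbase/rsgroup/<group>" lines as groups come and go.
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      System.out.println("rsgroup znodes: " + groups);
    } finally {
      zk.close();
    }
  }
}
```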
2023-07-24 06:10:52,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:52,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:10:52,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:52,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:10:52,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:10:52,558 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:52,562 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:10:52,563 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:10:52,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:10:52,573 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:52,576 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,577 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:10:52,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:52,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180252579, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:10:52,580 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:10:52,582 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:52,583 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,583 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,584 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:10:52,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:52,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:52,605 INFO [Listener at localhost/46655] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=494 (was 491) Potentially hanging thread: hconnection-0x63197ba-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=776 (was 776), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=377 (was 377), ProcessCount=177 (was 177), AvailableMemoryMB=6631 (was 6634) 2023-07-24 06:10:52,626 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=494, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=377, ProcessCount=177, AvailableMemoryMB=6630 2023-07-24 06:10:52,627 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-24 06:10:52,632 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,632 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:10:52,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
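Editor note: the "Waiting up to [60,000] milli-secs(wait.for.ratio=[1])" and "Waiting for cleanup to finish" entries come from a polling helper that re-checks a predicate until it holds or the deadline passes. The sketch below shows that generic pattern only; it is not the actual org.apache.hadoop.hbase.Waiter implementation, and the 100 ms poll interval is an assumption.

```java
import java.util.function.BooleanSupplier;

public final class PollUntil {
  private PollUntil() {}

  /**
   * Re-evaluate {@code condition} until it returns true or {@code timeoutMs}
   * elapses, mirroring the "Waiting up to [60,000] milli-secs" pattern in
   * this log. The 100 ms poll interval is an assumption.
   */
  public static boolean waitFor(long timeoutMs, BooleanSupplier condition)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(100);
    }
    return condition.getAsBoolean();
  }
}
```

In this log the cleanup predicate is "only the default and master groups remain, with all servers back in default", which is why the wait ends almost immediately.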
2023-07-24 06:10:52,634 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:52,635 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:10:52,635 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:52,636 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:10:52,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:10:52,643 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:52,647 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:10:52,649 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:10:52,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:10:52,656 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:52,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,662 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:10:52,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:52,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180252662, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:10:52,663 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:10:52,665 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:52,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,666 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:10:52,667 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:52,667 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:52,668 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,668 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,669 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:52,669 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:52,670 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
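Editor note: the entries that follow show what happens in testFailRemoveGroup once three region servers are moved into the new group "bar": the hbase:rsgroup region they host is not mapped to "bar", so the master schedules a REOPEN/MOVE TransitRegionStateProcedure and the region is flushed and closed before reopening on a default-group server. A minimal sketch of the client-side call that triggers this; host names and ports are copied from the log, and Address.fromParts is assumed to be available as in HBase 2.x.

```java
import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersToBarSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient admin = new RSGroupAdminClient(conn);
      admin.addRSGroup("bar");

      // Servers as they appear in the log; any region they currently host that
      // is not mapped to "bar" (here the hbase:rsgroup region) is moved away
      // by the master before the servers switch groups.
      Set<Address> servers = new TreeSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37173));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38203));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34793));
      admin.moveServers(servers, "bar");
    }
  }
}
```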
2023-07-24 06:10:52,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 06:10:52,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:52,678 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:52,687 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:52,687 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:52,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:34793] to rsgroup bar 2023-07-24 06:10:52,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:52,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 06:10:52,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:52,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:52,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(238): Moving server region 0aba53baeae40b1c65e437bbd16090b8, which do not belong to RSGroup bar 2023-07-24 06:10:52,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0aba53baeae40b1c65e437bbd16090b8, REOPEN/MOVE 2023-07-24 06:10:52,698 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 06:10:52,700 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0aba53baeae40b1c65e437bbd16090b8, REOPEN/MOVE 2023-07-24 06:10:52,701 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=0aba53baeae40b1c65e437bbd16090b8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:10:52,701 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179052701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179052701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179052701"}]},"ts":"1690179052701"} 2023-07-24 06:10:52,703 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE; CloseRegionProcedure 0aba53baeae40b1c65e437bbd16090b8, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:10:52,866 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:52,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0aba53baeae40b1c65e437bbd16090b8, disabling compactions & flushes 2023-07-24 06:10:52,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:52,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:52,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. after waiting 0 ms 2023-07-24 06:10:52,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:52,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0aba53baeae40b1c65e437bbd16090b8 1/1 column families, dataSize=6.37 KB heapSize=10.52 KB 2023-07-24 06:10:53,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.37 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/.tmp/m/db3adbabf0684f66bd60eb1086eed389 2023-07-24 06:10:53,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db3adbabf0684f66bd60eb1086eed389 2023-07-24 06:10:53,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/.tmp/m/db3adbabf0684f66bd60eb1086eed389 as hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m/db3adbabf0684f66bd60eb1086eed389 2023-07-24 06:10:53,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db3adbabf0684f66bd60eb1086eed389 2023-07-24 06:10:53,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m/db3adbabf0684f66bd60eb1086eed389, entries=9, sequenceid=26, filesize=5.5 K 2023-07-24 06:10:53,421 
INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.37 KB/6527, heapSize ~10.50 KB/10752, currentSize=0 B/0 for 0aba53baeae40b1c65e437bbd16090b8 in 553ms, sequenceid=26, compaction requested=false 2023-07-24 06:10:53,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-24 06:10:53,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:10:53,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:53,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0aba53baeae40b1c65e437bbd16090b8: 2023-07-24 06:10:53,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0aba53baeae40b1c65e437bbd16090b8 move to jenkins-hbase4.apache.org,40449,1690179042726 record at close sequenceid=26 2023-07-24 06:10:53,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:53,443 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=0aba53baeae40b1c65e437bbd16090b8, regionState=CLOSED 2023-07-24 06:10:53,443 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179053443"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179053443"}]},"ts":"1690179053443"} 2023-07-24 06:10:53,447 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-24 06:10:53,447 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; CloseRegionProcedure 0aba53baeae40b1c65e437bbd16090b8, server=jenkins-hbase4.apache.org,38203,1690179042473 in 742 msec 2023-07-24 06:10:53,448 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0aba53baeae40b1c65e437bbd16090b8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:10:53,599 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=0aba53baeae40b1c65e437bbd16090b8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:53,599 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179053599"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179053599"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179053599"}]},"ts":"1690179053599"} 2023-07-24 06:10:53,602 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=72, state=RUNNABLE; 
OpenRegionProcedure 0aba53baeae40b1c65e437bbd16090b8, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:53,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-24 06:10:53,758 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:53,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0aba53baeae40b1c65e437bbd16090b8, NAME => 'hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:53,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 06:10:53,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. service=MultiRowMutationService 2023-07-24 06:10:53,758 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 06:10:53,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:53,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:53,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:53,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:53,761 INFO [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:53,762 DEBUG [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m 2023-07-24 06:10:53,762 DEBUG [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m 2023-07-24 06:10:53,762 INFO [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; 
throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0aba53baeae40b1c65e437bbd16090b8 columnFamilyName m 2023-07-24 06:10:53,771 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db3adbabf0684f66bd60eb1086eed389 2023-07-24 06:10:53,772 DEBUG [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] regionserver.HStore(539): loaded hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m/db3adbabf0684f66bd60eb1086eed389 2023-07-24 06:10:53,772 INFO [StoreOpener-0aba53baeae40b1c65e437bbd16090b8-1] regionserver.HStore(310): Store=0aba53baeae40b1c65e437bbd16090b8/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:53,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:53,775 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:53,778 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:10:53,780 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0aba53baeae40b1c65e437bbd16090b8; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@23d3d938, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:53,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0aba53baeae40b1c65e437bbd16090b8: 2023-07-24 06:10:53,780 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8., pid=74, masterSystemTime=1690179053754 2023-07-24 06:10:53,783 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:10:53,783 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 
2023-07-24 06:10:53,783 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=0aba53baeae40b1c65e437bbd16090b8, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:53,783 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179053783"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179053783"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179053783"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179053783"}]},"ts":"1690179053783"} 2023-07-24 06:10:53,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=72 2023-07-24 06:10:53,789 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=72, state=SUCCESS; OpenRegionProcedure 0aba53baeae40b1c65e437bbd16090b8, server=jenkins-hbase4.apache.org,40449,1690179042726 in 184 msec 2023-07-24 06:10:53,792 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0aba53baeae40b1c65e437bbd16090b8, REOPEN/MOVE in 1.0920 sec 2023-07-24 06:10:54,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942, jenkins-hbase4.apache.org,38203,1690179042473] are moved back to default 2023-07-24 06:10:54,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-24 06:10:54,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:54,702 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38203] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:56390 deadline: 1690179114701, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40449 startCode=1690179042726. As of locationSeqNum=26. 
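The stretch above is the MoveServers call landing: three region servers are moved into "bar", the hbase:rsgroup region they still host is first re-homed to the default group (TransitRegionStateProcedure pid=72), and a stale cached location then surfaces as a RegionMovedException once the region settles on port 40449. A sketch of the equivalent client call, reusing the rsGroupAdmin handle from the earlier sketch (host/port values are the ones visible in the log; a real test would read them from the running mini cluster):

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;

Set<Address> servers = new HashSet<>();
servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37173));
servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38203));
servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34793));
// Regions on these servers that do not belong to "bar" (here hbase:rsgroup) are moved
// back to the default group before the group membership itself is rewritten in ZK.
rsGroupAdmin.moveServers(servers, "bar");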
2023-07-24 06:10:54,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:54,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:54,843 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 06:10:54,843 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:54,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:10:54,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-24 06:10:54,851 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:10:54,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 75 2023-07-24 06:10:54,852 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38203] ipc.CallRunner(144): callId: 180 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:56378 deadline: 1690179114852, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40449 startCode=1690179042726. As of locationSeqNum=26. 
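Right before the table create, the test re-reads the group via ListRSGroupInfos/GetRSGroupInfo to confirm the three servers now sit in "bar". A sketch of that verification step, under the same RSGroupAdminClient assumption:

import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
// Expect the three servers moved in above; the default group keeps only port 40449.
System.out.println("bar servers: " + bar.getServers());
for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
  System.out.println(group.getName() + " -> " + group.getServers());
}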
2023-07-24 06:10:54,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 06:10:54,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 06:10:54,960 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:54,960 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 06:10:54,961 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:54,961 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:54,966 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:10:54,968 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:54,968 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc empty. 2023-07-24 06:10:54,969 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:54,969 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 06:10:54,989 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:54,990 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b2c2b2e50113f02f3b5fb4026368f0fc, NAME => 'Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:55,007 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:55,007 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing b2c2b2e50113f02f3b5fb4026368f0fc, disabling compactions & flushes 2023-07-24 06:10:55,007 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:55,007 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:55,007 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. after waiting 0 ms 2023-07-24 06:10:55,007 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:55,007 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:55,007 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for b2c2b2e50113f02f3b5fb4026368f0fc: 2023-07-24 06:10:55,010 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:10:55,011 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179055011"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179055011"}]},"ts":"1690179055011"} 2023-07-24 06:10:55,013 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 06:10:55,017 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:10:55,017 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179055017"}]},"ts":"1690179055017"} 2023-07-24 06:10:55,019 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-24 06:10:55,023 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, ASSIGN}] 2023-07-24 06:10:55,028 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, ASSIGN 2023-07-24 06:10:55,029 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:10:55,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 06:10:55,180 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:55,180 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179055180"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179055180"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179055180"}]},"ts":"1690179055180"} 2023-07-24 06:10:55,186 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=76, state=RUNNABLE; OpenRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:55,342 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 
2023-07-24 06:10:55,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b2c2b2e50113f02f3b5fb4026368f0fc, NAME => 'Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:55,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:55,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,344 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,346 DEBUG [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/f 2023-07-24 06:10:55,346 DEBUG [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/f 2023-07-24 06:10:55,347 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b2c2b2e50113f02f3b5fb4026368f0fc columnFamilyName f 2023-07-24 06:10:55,347 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] regionserver.HStore(310): Store=b2c2b2e50113f02f3b5fb4026368f0fc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:55,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,348 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:55,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b2c2b2e50113f02f3b5fb4026368f0fc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10623771040, jitterRate=-0.010584220290184021}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:55,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b2c2b2e50113f02f3b5fb4026368f0fc: 2023-07-24 06:10:55,357 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc., pid=77, masterSystemTime=1690179055338 2023-07-24 06:10:55,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:55,360 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 
2023-07-24 06:10:55,360 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:55,360 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179055360"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179055360"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179055360"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179055360"}]},"ts":"1690179055360"} 2023-07-24 06:10:55,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=76 2023-07-24 06:10:55,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=76, state=SUCCESS; OpenRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,40449,1690179042726 in 179 msec 2023-07-24 06:10:55,367 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-24 06:10:55,367 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, ASSIGN in 341 msec 2023-07-24 06:10:55,368 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:10:55,368 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179055368"}]},"ts":"1690179055368"} 2023-07-24 06:10:55,369 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-24 06:10:55,372 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:10:55,373 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 526 msec 2023-07-24 06:10:55,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-24 06:10:55,459 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 75 completed 2023-07-24 06:10:55,460 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-24 06:10:55,460 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:55,473 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
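CreateTableProcedure pid=75 has just finished and the test is waiting for the single region of Group_testFailRemoveGroup to be assigned. The table itself is created through the ordinary Admin API; a minimal sketch with the table and column-family names taken from the log (the split policy and other attributes shown in the region-open lines are defaults, not explicit settings here):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

TableDescriptor desc = TableDescriptorBuilder
    .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
    .build();
try (Admin admin = conn.getAdmin()) {
  // Drives the CREATE_TABLE_* states of pid=75 above (pre-op, write FS layout,
  // add to meta, assign regions, update descriptor cache, post-op).
  admin.createTable(desc);
}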
2023-07-24 06:10:55,473 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:55,474 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-24 06:10:55,477 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-24 06:10:55,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:55,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 06:10:55,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:55,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:55,492 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-24 06:10:55,492 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region b2c2b2e50113f02f3b5fb4026368f0fc to RSGroup bar 2023-07-24 06:10:55,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:55,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:55,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:55,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:55,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 06:10:55,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:55,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, REOPEN/MOVE 2023-07-24 06:10:55,494 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-24 06:10:55,496 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, REOPEN/MOVE 2023-07-24 06:10:55,497 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:55,497 DEBUG [PEWorker-5] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179055497"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179055497"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179055497"}]},"ts":"1690179055497"} 2023-07-24 06:10:55,500 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:55,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b2c2b2e50113f02f3b5fb4026368f0fc, disabling compactions & flushes 2023-07-24 06:10:55,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:55,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:55,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. after waiting 0 ms 2023-07-24 06:10:55,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:55,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:55,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 
2023-07-24 06:10:55,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b2c2b2e50113f02f3b5fb4026368f0fc: 2023-07-24 06:10:55,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b2c2b2e50113f02f3b5fb4026368f0fc move to jenkins-hbase4.apache.org,34793,1690179046626 record at close sequenceid=2 2023-07-24 06:10:55,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,662 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=CLOSED 2023-07-24 06:10:55,662 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179055662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179055662"}]},"ts":"1690179055662"} 2023-07-24 06:10:55,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-24 06:10:55,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,40449,1690179042726 in 165 msec 2023-07-24 06:10:55,667 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34793,1690179046626; forceNewPlan=false, retain=false 2023-07-24 06:10:55,817 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 06:10:55,818 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:55,818 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179055818"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179055818"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179055818"}]},"ts":"1690179055818"} 2023-07-24 06:10:55,822 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:55,984 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 
2023-07-24 06:10:55,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b2c2b2e50113f02f3b5fb4026368f0fc, NAME => 'Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:55,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:55,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,989 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,991 DEBUG [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/f 2023-07-24 06:10:55,991 DEBUG [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/f 2023-07-24 06:10:55,991 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b2c2b2e50113f02f3b5fb4026368f0fc columnFamilyName f 2023-07-24 06:10:55,992 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] regionserver.HStore(310): Store=b2c2b2e50113f02f3b5fb4026368f0fc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:55,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,994 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:55,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b2c2b2e50113f02f3b5fb4026368f0fc; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9605805920, jitterRate=-0.10538960993289948}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:55,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b2c2b2e50113f02f3b5fb4026368f0fc: 2023-07-24 06:10:56,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc., pid=80, masterSystemTime=1690179055976 2023-07-24 06:10:56,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:56,002 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:56,002 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:56,002 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179056002"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179056002"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179056002"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179056002"}]},"ts":"1690179056002"} 2023-07-24 06:10:56,007 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-24 06:10:56,007 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,34793,1690179046626 in 183 msec 2023-07-24 06:10:56,009 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, REOPEN/MOVE in 514 msec 2023-07-24 06:10:56,422 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 06:10:56,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-24 06:10:56,496 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
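The MoveTables request above (pid=78) reopens the table's only region on a server in "bar" (port 34793) and then records the table-to-group mapping. A sketch of the client side, same assumptions as the earlier snippets:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;

Set<TableName> tables = new HashSet<>();
tables.add(TableName.valueOf("Group_testFailRemoveGroup"));
// Every region of the table is re-assigned onto a server of the target group via
// REOPEN/MOVE procedures, then the group metadata is rewritten
// (the "Writing ZK GroupInfo count" lines).
rsGroupAdmin.moveTables(tables, "bar");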
2023-07-24 06:10:56,496 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:56,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:56,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:56,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 06:10:56,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:56,508 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 06:10:56,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:56,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:53912 deadline: 1690180256508, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-24 06:10:56,510 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:34793] to rsgroup default 2023-07-24 06:10:56,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:56,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:53912 deadline: 1690180256510, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-24 06:10:56,513 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-24 06:10:56,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:56,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 06:10:56,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:56,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:56,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-24 06:10:56,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region b2c2b2e50113f02f3b5fb4026368f0fc to RSGroup default 2023-07-24 06:10:56,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, REOPEN/MOVE 2023-07-24 06:10:56,521 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 06:10:56,522 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, REOPEN/MOVE 2023-07-24 06:10:56,523 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:56,523 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179056523"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179056523"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179056523"}]},"ts":"1690179056523"} 2023-07-24 06:10:56,525 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:56,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:56,681 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b2c2b2e50113f02f3b5fb4026368f0fc, disabling compactions & flushes 2023-07-24 06:10:56,681 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:56,681 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:56,681 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. after waiting 0 ms 2023-07-24 06:10:56,681 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:56,685 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:10:56,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 
2023-07-24 06:10:56,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b2c2b2e50113f02f3b5fb4026368f0fc: 2023-07-24 06:10:56,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b2c2b2e50113f02f3b5fb4026368f0fc move to jenkins-hbase4.apache.org,40449,1690179042726 record at close sequenceid=5 2023-07-24 06:10:56,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:56,690 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=CLOSED 2023-07-24 06:10:56,690 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179056690"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179056690"}]},"ts":"1690179056690"} 2023-07-24 06:10:56,693 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-24 06:10:56,694 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,34793,1690179046626 in 167 msec 2023-07-24 06:10:56,694 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:10:56,845 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:56,845 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179056845"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179056845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179056845"}]},"ts":"1690179056845"} 2023-07-24 06:10:56,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:57,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 
2023-07-24 06:10:57,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b2c2b2e50113f02f3b5fb4026368f0fc, NAME => 'Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:57,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:57,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,012 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,013 DEBUG [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/f 2023-07-24 06:10:57,014 DEBUG [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/f 2023-07-24 06:10:57,014 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b2c2b2e50113f02f3b5fb4026368f0fc columnFamilyName f 2023-07-24 06:10:57,015 INFO [StoreOpener-b2c2b2e50113f02f3b5fb4026368f0fc-1] regionserver.HStore(310): Store=b2c2b2e50113f02f3b5fb4026368f0fc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:57,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,020 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,024 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b2c2b2e50113f02f3b5fb4026368f0fc; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11758216800, jitterRate=0.0950692743062973}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:57,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b2c2b2e50113f02f3b5fb4026368f0fc: 2023-07-24 06:10:57,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc., pid=83, masterSystemTime=1690179056999 2023-07-24 06:10:57,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:57,027 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:57,028 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:57,028 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179057028"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179057028"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179057028"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179057028"}]},"ts":"1690179057028"} 2023-07-24 06:10:57,031 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-24 06:10:57,032 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,40449,1690179042726 in 183 msec 2023-07-24 06:10:57,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, REOPEN/MOVE in 513 msec 2023-07-24 06:10:57,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-24 06:10:57,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
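The two rejections earlier in this stretch (remove rsgroup bar while a table is still assigned to it, then moveServers out of bar while that table would be left with no servers) show the ordering the rsgroup admin enforces during cleanup: the table has to leave the group before the group can be drained of servers, which is why the log above ends with the region being reopened on the default-group server ...,40449,.... A hedged sketch of that recovery step, reusing the hypothetical client object and names from the previous sketch; catching ConstraintException on the client assumes the remote exception is unwrapped to the same class, as the stack traces later in this log suggest.

import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableBackSketch {
  // Assumes rsGroupAdmin was built as in the previous sketch.
  static void moveTableBackToDefault(RSGroupAdminClient rsGroupAdmin) throws Exception {
    try {
      // Rejected while "bar" still hosts Group_testFailRemoveGroup: a group may
      // not be left holding tables with no servers to carry them.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34793)), "default");
    } catch (ConstraintException e) {
      // Move the table off the group first; the master then reopens its region
      // on a default-group server (the REOPEN/MOVE logged above).
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
    }
  }
}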
2023-07-24 06:10:57,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:57,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:57,528 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:57,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 06:10:57,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:57,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:53912 deadline: 1690180257531, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
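The remove rsgroup bar call is rejected again here because the group, though now empty of tables, still holds its three servers; a removal only succeeds once both lists are empty, which is what the MoveServers/RemoveRSGroup pair that follows in the log achieves. A minimal sketch of that final step, with the same hypothetical client object and the server addresses taken from the log:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupSketch {
  // Assumes rsGroupAdmin was built as in the first sketch and that the group's
  // tables have already been moved back to "default".
  static void removeBar(RSGroupAdminClient rsGroupAdmin) throws Exception {
    Set<Address> barServers = new HashSet<>(Arrays.asList(
        Address.fromParts("jenkins-hbase4.apache.org", 34793),
        Address.fromParts("jenkins-hbase4.apache.org", 37173),
        Address.fromParts("jenkins-hbase4.apache.org", 38203)));
    // removeRSGroup("bar") is rejected while the group still has servers, so
    // drain them back to "default" first; only then does the removal go through.
    rsGroupAdmin.moveServers(barServers, "default");
    rsGroupAdmin.removeRSGroup("bar");
  }
}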
2023-07-24 06:10:57,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:34793] to rsgroup default 2023-07-24 06:10:57,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:57,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 06:10:57,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:57,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:57,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-24 06:10:57,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942, jenkins-hbase4.apache.org,38203,1690179042473] are moved back to bar 2023-07-24 06:10:57,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-24 06:10:57,543 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:57,547 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:57,547 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:57,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 06:10:57,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:57,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:57,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 06:10:57,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:57,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:57,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:57,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:57,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:57,567 INFO [Listener at localhost/46655] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-24 06:10:57,568 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-24 06:10:57,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-24 06:10:57,575 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179057575"}]},"ts":"1690179057575"} 2023-07-24 06:10:57,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-24 06:10:57,578 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-24 06:10:57,581 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-24 06:10:57,582 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, UNASSIGN}] 2023-07-24 06:10:57,585 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, UNASSIGN 2023-07-24 06:10:57,586 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:10:57,586 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179057586"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179057586"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179057586"}]},"ts":"1690179057586"} 2023-07-24 06:10:57,588 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; CloseRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:10:57,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-24 06:10:57,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,748 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b2c2b2e50113f02f3b5fb4026368f0fc, disabling compactions & flushes 2023-07-24 06:10:57,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:57,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:57,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. after waiting 0 ms 2023-07-24 06:10:57,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:57,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 06:10:57,754 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc. 2023-07-24 06:10:57,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b2c2b2e50113f02f3b5fb4026368f0fc: 2023-07-24 06:10:57,757 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,758 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=b2c2b2e50113f02f3b5fb4026368f0fc, regionState=CLOSED 2023-07-24 06:10:57,758 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690179057758"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179057758"}]},"ts":"1690179057758"} 2023-07-24 06:10:57,762 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-24 06:10:57,762 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; CloseRegionProcedure b2c2b2e50113f02f3b5fb4026368f0fc, server=jenkins-hbase4.apache.org,40449,1690179042726 in 172 msec 2023-07-24 06:10:57,764 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-24 06:10:57,765 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b2c2b2e50113f02f3b5fb4026368f0fc, UNASSIGN in 180 msec 2023-07-24 06:10:57,765 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179057765"}]},"ts":"1690179057765"} 2023-07-24 06:10:57,767 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-24 06:10:57,769 INFO [PEWorker-1] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-24 06:10:57,771 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 202 msec 2023-07-24 06:10:57,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-24 06:10:57,879 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-24 06:10:57,880 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-24 06:10:57,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 06:10:57,883 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 06:10:57,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-24 06:10:57,884 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=87, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 06:10:57,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:57,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:57,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:10:57,889 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,891 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/recovered.edits] 2023-07-24 06:10:57,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-24 06:10:57,898 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/recovered.edits/10.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc/recovered.edits/10.seqid 2023-07-24 06:10:57,899 DEBUG [HFileArchiver-1] 
backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testFailRemoveGroup/b2c2b2e50113f02f3b5fb4026368f0fc 2023-07-24 06:10:57,899 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 06:10:57,902 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=87, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 06:10:57,905 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-24 06:10:57,908 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-24 06:10:57,909 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=87, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 06:10:57,909 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-24 06:10:57,910 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179057910"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:57,912 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 06:10:57,912 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b2c2b2e50113f02f3b5fb4026368f0fc, NAME => 'Group_testFailRemoveGroup,,1690179054846.b2c2b2e50113f02f3b5fb4026368f0fc.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 06:10:57,912 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
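Independent of rsgroups, the DisableTableProcedure and DeleteTableProcedure entries above are the standard two-step drop of the test table: regions are unassigned and the table marked DISABLED in hbase:meta, then the region directories are archived by HFileArchiver and the metadata removed. A minimal sketch of the corresponding Admin calls (hypothetical client code, not the test's own teardown):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testFailRemoveGroup");
      // DisableTableProcedure: the region is unassigned and the table state
      // flips to DISABLED in hbase:meta (pid=84/85/86 above).
      admin.disableTable(table);
      // DeleteTableProcedure: region directories are archived by HFileArchiver,
      // region rows are purged from hbase:meta, and the descriptor is dropped
      // (pid=87 above).
      admin.deleteTable(table);
    }
  }
}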
2023-07-24 06:10:57,912 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690179057912"}]},"ts":"9223372036854775807"} 2023-07-24 06:10:57,914 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-24 06:10:57,916 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=87, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 06:10:57,917 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 36 msec 2023-07-24 06:10:57,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-24 06:10:57,995 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-24 06:10:57,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:57,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:58,000 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:10:58,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:10:58,000 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:58,001 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:10:58,001 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:58,002 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:10:58,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:58,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:10:58,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:58,012 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:10:58,013 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:10:58,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:58,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:58,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:10:58,025 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:58,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:58,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:58,033 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:10:58,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:58,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180258033, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:10:58,034 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:10:58,035 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:58,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:58,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:58,037 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:10:58,038 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:58,038 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:58,059 INFO [Listener at localhost/46655] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=500 (was 494) Potentially hanging thread: hconnection-0x369a3209-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data4/current 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1601604286_17 at /127.0.0.1:50964 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2231fec8-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-854381483_17 at /127.0.0.1:48178 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=778 (was 776) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=371 (was 377), ProcessCount=177 (was 177), AvailableMemoryMB=6382 (was 6630) 2023-07-24 06:10:58,078 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=500, OpenFileDescriptor=778, MaxFileDescriptor=60000, SystemLoadAverage=371, ProcessCount=177, AvailableMemoryMB=6382 2023-07-24 06:10:58,079 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-24 06:10:58,083 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:58,084 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:58,085 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:10:58,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:10:58,085 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:10:58,086 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:10:58,086 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:58,087 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:10:58,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:58,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:10:58,094 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:10:58,097 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:10:58,098 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:10:58,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:58,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:58,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:10:58,105 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:58,108 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:58,108 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:58,111 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:10:58,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:10:58,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180258111, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:10:58,111 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:10:58,115 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:58,116 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:58,116 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:58,116 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:10:58,117 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:58,117 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:58,118 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:10:58,118 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:58,119 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1676326282 2023-07-24 06:10:58,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:58,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:58,123 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:10:58,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:58,131 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:10:58,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:58,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:58,139 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34793] to rsgroup Group_testMultiTableMove_1676326282 2023-07-24 06:10:58,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:58,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:58,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:10:58,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:58,145 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 06:10:58,145 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626] are moved back to default 2023-07-24 06:10:58,145 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1676326282 2023-07-24 06:10:58,145 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:10:58,149 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:10:58,149 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:10:58,152 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1676326282 2023-07-24 06:10:58,152 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): 
User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:10:58,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:10:58,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 06:10:58,158 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:10:58,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 88 2023-07-24 06:10:58,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 06:10:58,161 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:58,162 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:58,162 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:10:58,163 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:58,165 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:10:58,167 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,168 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 empty. 
2023-07-24 06:10:58,169 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,169 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 06:10:58,197 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:58,198 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => dfbbaa63b897dcf52163c11e8ab2a210, NAME => 'GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:58,221 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:58,221 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing dfbbaa63b897dcf52163c11e8ab2a210, disabling compactions & flushes 2023-07-24 06:10:58,222 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:58,222 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:58,222 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. after waiting 0 ms 2023-07-24 06:10:58,222 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:58,222 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 
2023-07-24 06:10:58,222 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for dfbbaa63b897dcf52163c11e8ab2a210: 2023-07-24 06:10:58,225 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:10:58,226 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179058226"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179058226"}]},"ts":"1690179058226"} 2023-07-24 06:10:58,229 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:10:58,231 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:10:58,231 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179058231"}]},"ts":"1690179058231"} 2023-07-24 06:10:58,236 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-24 06:10:58,240 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:58,240 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:58,240 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:58,240 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:58,240 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:58,241 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, ASSIGN}] 2023-07-24 06:10:58,243 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, ASSIGN 2023-07-24 06:10:58,244 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37173,1690179042942; forceNewPlan=false, retain=false 2023-07-24 06:10:58,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 06:10:58,395 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 06:10:58,396 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=dfbbaa63b897dcf52163c11e8ab2a210, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:58,397 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179058396"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179058396"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179058396"}]},"ts":"1690179058396"} 2023-07-24 06:10:58,399 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure dfbbaa63b897dcf52163c11e8ab2a210, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:58,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 06:10:58,556 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:58,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dfbbaa63b897dcf52163c11e8ab2a210, NAME => 'GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:58,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:58,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,558 INFO [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,564 DEBUG [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/f 2023-07-24 06:10:58,564 DEBUG [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/f 2023-07-24 06:10:58,564 INFO [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dfbbaa63b897dcf52163c11e8ab2a210 columnFamilyName f 2023-07-24 06:10:58,565 INFO [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] regionserver.HStore(310): Store=dfbbaa63b897dcf52163c11e8ab2a210/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:58,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:58,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:58,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dfbbaa63b897dcf52163c11e8ab2a210; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10071394560, jitterRate=-0.06202828884124756}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:58,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dfbbaa63b897dcf52163c11e8ab2a210: 2023-07-24 06:10:58,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210., pid=90, masterSystemTime=1690179058550 2023-07-24 06:10:58,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:58,578 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 
2023-07-24 06:10:58,578 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=dfbbaa63b897dcf52163c11e8ab2a210, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:58,578 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179058578"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179058578"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179058578"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179058578"}]},"ts":"1690179058578"} 2023-07-24 06:10:58,582 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-24 06:10:58,582 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure dfbbaa63b897dcf52163c11e8ab2a210, server=jenkins-hbase4.apache.org,37173,1690179042942 in 181 msec 2023-07-24 06:10:58,584 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-24 06:10:58,585 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, ASSIGN in 341 msec 2023-07-24 06:10:58,585 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:10:58,586 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179058585"}]},"ts":"1690179058585"} 2023-07-24 06:10:58,587 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-24 06:10:58,593 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:10:58,595 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 439 msec 2023-07-24 06:10:58,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 06:10:58,771 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 88 completed 2023-07-24 06:10:58,772 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-24 06:10:58,772 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:58,783 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-24 06:10:58,783 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:58,783 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-24 06:10:58,786 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:10:58,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 06:10:58,796 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:10:58,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 91 2023-07-24 06:10:58,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 06:10:58,803 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:58,803 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:58,804 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:10:58,804 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:58,807 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:10:58,809 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:58,810 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 empty. 
2023-07-24 06:10:58,810 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:58,811 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 06:10:58,845 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-24 06:10:58,847 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 268b2c16be81f1933c3113045c14d813, NAME => 'GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:10:58,876 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:58,876 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 268b2c16be81f1933c3113045c14d813, disabling compactions & flushes 2023-07-24 06:10:58,876 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:58,876 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:58,876 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. after waiting 0 ms 2023-07-24 06:10:58,876 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:58,876 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 
2023-07-24 06:10:58,876 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 268b2c16be81f1933c3113045c14d813: 2023-07-24 06:10:58,879 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:10:58,880 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179058880"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179058880"}]},"ts":"1690179058880"} 2023-07-24 06:10:58,882 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:10:58,883 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:10:58,883 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179058883"}]},"ts":"1690179058883"} 2023-07-24 06:10:58,884 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-24 06:10:58,888 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:10:58,889 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:10:58,889 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:10:58,889 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:10:58,889 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:10:58,889 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, ASSIGN}] 2023-07-24 06:10:58,891 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, ASSIGN 2023-07-24 06:10:58,892 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37173,1690179042942; forceNewPlan=false, retain=false 2023-07-24 06:10:58,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 06:10:59,042 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 06:10:59,044 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=268b2c16be81f1933c3113045c14d813, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:59,044 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059044"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179059044"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179059044"}]},"ts":"1690179059044"} 2023-07-24 06:10:59,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 268b2c16be81f1933c3113045c14d813, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:59,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 06:10:59,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:59,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 268b2c16be81f1933c3113045c14d813, NAME => 'GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:59,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:59,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,206 INFO [StoreOpener-268b2c16be81f1933c3113045c14d813-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,208 DEBUG [StoreOpener-268b2c16be81f1933c3113045c14d813-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/f 2023-07-24 06:10:59,208 DEBUG [StoreOpener-268b2c16be81f1933c3113045c14d813-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/f 2023-07-24 06:10:59,208 INFO [StoreOpener-268b2c16be81f1933c3113045c14d813-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 268b2c16be81f1933c3113045c14d813 columnFamilyName f 2023-07-24 06:10:59,209 INFO [StoreOpener-268b2c16be81f1933c3113045c14d813-1] regionserver.HStore(310): Store=268b2c16be81f1933c3113045c14d813/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:59,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:10:59,217 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 268b2c16be81f1933c3113045c14d813; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10016904960, jitterRate=-0.06710302829742432}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:59,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 268b2c16be81f1933c3113045c14d813: 2023-07-24 06:10:59,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813., pid=93, masterSystemTime=1690179059198 2023-07-24 06:10:59,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:59,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 
2023-07-24 06:10:59,220 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=268b2c16be81f1933c3113045c14d813, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:59,220 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059220"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179059220"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179059220"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179059220"}]},"ts":"1690179059220"} 2023-07-24 06:10:59,223 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-24 06:10:59,223 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 268b2c16be81f1933c3113045c14d813, server=jenkins-hbase4.apache.org,37173,1690179042942 in 175 msec 2023-07-24 06:10:59,228 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-24 06:10:59,228 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, ASSIGN in 334 msec 2023-07-24 06:10:59,228 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:10:59,228 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179059228"}]},"ts":"1690179059228"} 2023-07-24 06:10:59,231 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-24 06:10:59,233 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:10:59,235 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 447 msec 2023-07-24 06:10:59,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-24 06:10:59,407 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 91 completed 2023-07-24 06:10:59,407 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-24 06:10:59,407 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:59,411 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
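
The records above trace a CreateTableProcedure (pid=91) through ADD_TO_META, ASSIGN_REGIONS and POST_OPERATION, followed by the test harness waiting for region assignment. Below is a minimal sketch (not the test's literal code) of the client-side calls that produce this sequence; TEST_UTIL stands for the already-running HBaseTestingUtility mini cluster, and the table and family names mirror the log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  static void createAndWait(HBaseTestingUtility TEST_UTIL) throws Exception {
    TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");

    // Calls Admin.createTable() under the hood; drives the CREATE_TABLE_* states above.
    TEST_UTIL.createTable(tableB, Bytes.toBytes("f"));

    // Corresponds to the "Waiting until all regions of table ... get assigned" messages:
    // blocks until hbase:meta and the assignment manager agree the region is OPEN.
    TEST_UTIL.waitUntilAllRegionsAssigned(tableB);
  }
}
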
2023-07-24 06:10:59,411 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:59,411 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-24 06:10:59,412 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:10:59,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 06:10:59,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:10:59,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 06:10:59,426 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:10:59,426 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1676326282 2023-07-24 06:10:59,429 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1676326282 2023-07-24 06:10:59,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:10:59,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:10:59,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:10:59,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:10:59,436 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1676326282 2023-07-24 06:10:59,436 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region 268b2c16be81f1933c3113045c14d813 to RSGroup Group_testMultiTableMove_1676326282 2023-07-24 06:10:59,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, REOPEN/MOVE 2023-07-24 06:10:59,437 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1676326282 2023-07-24 06:10:59,438 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region dfbbaa63b897dcf52163c11e8ab2a210 to RSGroup Group_testMultiTableMove_1676326282 2023-07-24 06:10:59,438 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, REOPEN/MOVE 2023-07-24 06:10:59,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, REOPEN/MOVE 2023-07-24 06:10:59,439 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=268b2c16be81f1933c3113045c14d813, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:59,440 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, REOPEN/MOVE 2023-07-24 06:10:59,439 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1676326282, current retry=0 2023-07-24 06:10:59,440 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059439"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179059439"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179059439"}]},"ts":"1690179059439"} 2023-07-24 06:10:59,441 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=dfbbaa63b897dcf52163c11e8ab2a210, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:10:59,441 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059441"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179059441"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179059441"}]},"ts":"1690179059441"} 2023-07-24 06:10:59,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=94, state=RUNNABLE; CloseRegionProcedure 268b2c16be81f1933c3113045c14d813, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:59,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=95, state=RUNNABLE; CloseRegionProcedure dfbbaa63b897dcf52163c11e8ab2a210, server=jenkins-hbase4.apache.org,37173,1690179042942}] 2023-07-24 06:10:59,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dfbbaa63b897dcf52163c11e8ab2a210, disabling compactions & flushes 2023-07-24 06:10:59,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:59,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:59,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. after waiting 0 ms 2023-07-24 06:10:59,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:59,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:59,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:59,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dfbbaa63b897dcf52163c11e8ab2a210: 2023-07-24 06:10:59,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dfbbaa63b897dcf52163c11e8ab2a210 move to jenkins-hbase4.apache.org,34793,1690179046626 record at close sequenceid=2 2023-07-24 06:10:59,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 268b2c16be81f1933c3113045c14d813, disabling compactions & flushes 2023-07-24 06:10:59,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:59,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:59,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. after waiting 0 ms 2023-07-24 06:10:59,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 
2023-07-24 06:10:59,615 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=dfbbaa63b897dcf52163c11e8ab2a210, regionState=CLOSED 2023-07-24 06:10:59,615 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179059615"}]},"ts":"1690179059615"} 2023-07-24 06:10:59,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:10:59,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:59,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 268b2c16be81f1933c3113045c14d813: 2023-07-24 06:10:59,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 268b2c16be81f1933c3113045c14d813 move to jenkins-hbase4.apache.org,34793,1690179046626 record at close sequenceid=2 2023-07-24 06:10:59,620 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=95 2023-07-24 06:10:59,620 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=95, state=SUCCESS; CloseRegionProcedure dfbbaa63b897dcf52163c11e8ab2a210, server=jenkins-hbase4.apache.org,37173,1690179042942 in 172 msec 2023-07-24 06:10:59,621 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34793,1690179046626; forceNewPlan=false, retain=false 2023-07-24 06:10:59,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,623 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=268b2c16be81f1933c3113045c14d813, regionState=CLOSED 2023-07-24 06:10:59,623 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179059623"}]},"ts":"1690179059623"} 2023-07-24 06:10:59,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=94 2023-07-24 06:10:59,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=94, state=SUCCESS; CloseRegionProcedure 268b2c16be81f1933c3113045c14d813, server=jenkins-hbase4.apache.org,37173,1690179042942 in 182 msec 2023-07-24 06:10:59,627 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, REOPEN/MOVE; 
state=CLOSED, location=jenkins-hbase4.apache.org,34793,1690179046626; forceNewPlan=false, retain=false 2023-07-24 06:10:59,771 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=268b2c16be81f1933c3113045c14d813, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:59,771 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=dfbbaa63b897dcf52163c11e8ab2a210, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:59,772 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059771"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179059771"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179059771"}]},"ts":"1690179059771"} 2023-07-24 06:10:59,772 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059771"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179059771"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179059771"}]},"ts":"1690179059771"} 2023-07-24 06:10:59,773 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=94, state=RUNNABLE; OpenRegionProcedure 268b2c16be81f1933c3113045c14d813, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:59,774 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=95, state=RUNNABLE; OpenRegionProcedure dfbbaa63b897dcf52163c11e8ab2a210, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:10:59,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 
2023-07-24 06:10:59,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 268b2c16be81f1933c3113045c14d813, NAME => 'GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:59,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:59,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,934 INFO [StoreOpener-268b2c16be81f1933c3113045c14d813-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,936 DEBUG [StoreOpener-268b2c16be81f1933c3113045c14d813-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/f 2023-07-24 06:10:59,936 DEBUG [StoreOpener-268b2c16be81f1933c3113045c14d813-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/f 2023-07-24 06:10:59,936 INFO [StoreOpener-268b2c16be81f1933c3113045c14d813-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 268b2c16be81f1933c3113045c14d813 columnFamilyName f 2023-07-24 06:10:59,937 INFO [StoreOpener-268b2c16be81f1933c3113045c14d813-1] regionserver.HStore(310): Store=268b2c16be81f1933c3113045c14d813/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:59,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 268b2c16be81f1933c3113045c14d813 2023-07-24 06:10:59,945 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 268b2c16be81f1933c3113045c14d813; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11206532800, jitterRate=0.04368969798088074}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:59,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 268b2c16be81f1933c3113045c14d813: 2023-07-24 06:10:59,945 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813., pid=98, masterSystemTime=1690179059925 2023-07-24 06:10:59,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:59,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:10:59,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 
2023-07-24 06:10:59,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dfbbaa63b897dcf52163c11e8ab2a210, NAME => 'GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:10:59,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:10:59,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,949 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=268b2c16be81f1933c3113045c14d813, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:59,949 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059949"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179059949"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179059949"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179059949"}]},"ts":"1690179059949"} 2023-07-24 06:10:59,950 INFO [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,951 DEBUG [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/f 2023-07-24 06:10:59,951 DEBUG [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/f 2023-07-24 06:10:59,951 INFO [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dfbbaa63b897dcf52163c11e8ab2a210 columnFamilyName f 2023-07-24 06:10:59,952 INFO [StoreOpener-dfbbaa63b897dcf52163c11e8ab2a210-1] regionserver.HStore(310): Store=dfbbaa63b897dcf52163c11e8ab2a210/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:10:59,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,953 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=94 2023-07-24 06:10:59,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=94, state=SUCCESS; OpenRegionProcedure 268b2c16be81f1933c3113045c14d813, server=jenkins-hbase4.apache.org,34793,1690179046626 in 178 msec 2023-07-24 06:10:59,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,955 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, REOPEN/MOVE in 518 msec 2023-07-24 06:10:59,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:10:59,959 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dfbbaa63b897dcf52163c11e8ab2a210; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11620805280, jitterRate=0.08227182924747467}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:10:59,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dfbbaa63b897dcf52163c11e8ab2a210: 2023-07-24 06:10:59,960 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210., pid=99, masterSystemTime=1690179059925 2023-07-24 06:10:59,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:10:59,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 
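
The REOPEN/MOVE procedures above (pids 94-99) are the master-side effect of a single RSGroupAdminService.MoveTables call: each region is closed on its current RegionServer and reopened on a server belonging to the target group. A minimal sketch of the client side, assuming the target group already exists with at least one RegionServer and that conn is a hypothetical open Connection to this cluster (group and table names taken from the log):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesSketch {
  static void moveBothTables(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    String targetGroup = "Group_testMultiTableMove_1676326282";

    TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");
    TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
    Set<TableName> tables = new HashSet<>(Arrays.asList(tableA, tableB));

    // Issues RSGroupAdminService.MoveTables; the master then runs the
    // TransitRegionStateProcedure REOPEN/MOVE steps logged above.
    rsGroupAdmin.moveTables(tables, targetGroup);

    // Equivalent of the GetRSGroupInfoOfTable requests in the log: verify membership.
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(tableA);
    assert targetGroup.equals(info.getName());
  }
}
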
2023-07-24 06:10:59,962 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=dfbbaa63b897dcf52163c11e8ab2a210, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:10:59,962 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179059962"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179059962"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179059962"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179059962"}]},"ts":"1690179059962"} 2023-07-24 06:10:59,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=95 2023-07-24 06:10:59,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=95, state=SUCCESS; OpenRegionProcedure dfbbaa63b897dcf52163c11e8ab2a210, server=jenkins-hbase4.apache.org,34793,1690179046626 in 189 msec 2023-07-24 06:10:59,967 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, REOPEN/MOVE in 527 msec 2023-07-24 06:11:00,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=94 2023-07-24 06:11:00,441 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1676326282. 2023-07-24 06:11:00,441 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:00,445 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:00,445 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:00,448 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 06:11:00,448 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:00,449 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 06:11:00,449 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:00,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:00,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:00,451 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1676326282 2023-07-24 06:11:00,452 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:00,454 INFO [Listener at localhost/46655] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-24 06:11:00,455 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-24 06:11:00,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 06:11:00,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 06:11:00,460 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179060460"}]},"ts":"1690179060460"} 2023-07-24 06:11:00,462 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-24 06:11:00,463 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-24 06:11:00,464 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, UNASSIGN}] 2023-07-24 06:11:00,466 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, UNASSIGN 2023-07-24 06:11:00,468 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=dfbbaa63b897dcf52163c11e8ab2a210, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:00,468 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179060467"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179060467"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179060467"}]},"ts":"1690179060467"} 2023-07-24 06:11:00,470 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; CloseRegionProcedure dfbbaa63b897dcf52163c11e8ab2a210, 
server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:11:00,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 06:11:00,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:11:00,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dfbbaa63b897dcf52163c11e8ab2a210, disabling compactions & flushes 2023-07-24 06:11:00,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:11:00,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:11:00,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. after waiting 0 ms 2023-07-24 06:11:00,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 2023-07-24 06:11:00,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:11:00,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210. 
2023-07-24 06:11:00,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dfbbaa63b897dcf52163c11e8ab2a210: 2023-07-24 06:11:00,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:11:00,631 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=dfbbaa63b897dcf52163c11e8ab2a210, regionState=CLOSED 2023-07-24 06:11:00,631 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179060631"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179060631"}]},"ts":"1690179060631"} 2023-07-24 06:11:00,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-24 06:11:00,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; CloseRegionProcedure dfbbaa63b897dcf52163c11e8ab2a210, server=jenkins-hbase4.apache.org,34793,1690179046626 in 162 msec 2023-07-24 06:11:00,636 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-24 06:11:00,636 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=dfbbaa63b897dcf52163c11e8ab2a210, UNASSIGN in 170 msec 2023-07-24 06:11:00,639 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179060639"}]},"ts":"1690179060639"} 2023-07-24 06:11:00,640 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-24 06:11:00,642 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-24 06:11:00,644 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 188 msec 2023-07-24 06:11:00,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 06:11:00,768 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-24 06:11:00,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-24 06:11:00,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 06:11:00,772 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 06:11:00,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1676326282' 2023-07-24 06:11:00,774 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=103, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 06:11:00,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:00,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:00,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:11:00,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:00,782 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:11:00,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-24 06:11:00,785 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/recovered.edits] 2023-07-24 06:11:00,795 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/recovered.edits/7.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210/recovered.edits/7.seqid 2023-07-24 06:11:00,798 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveA/dfbbaa63b897dcf52163c11e8ab2a210 2023-07-24 06:11:00,798 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 06:11:00,802 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=103, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 06:11:00,804 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-24 06:11:00,806 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-24 06:11:00,807 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=103, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 06:11:00,807 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-24 06:11:00,807 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179060807"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:00,809 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 06:11:00,809 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => dfbbaa63b897dcf52163c11e8ab2a210, NAME => 'GrouptestMultiTableMoveA,,1690179058154.dfbbaa63b897dcf52163c11e8ab2a210.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 06:11:00,809 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-24 06:11:00,809 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690179060809"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:00,812 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-24 06:11:00,814 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=103, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 06:11:00,815 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 45 msec 2023-07-24 06:11:00,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-24 06:11:00,886 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-24 06:11:00,887 INFO [Listener at localhost/46655] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-24 06:11:00,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-24 06:11:00,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 06:11:00,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 06:11:00,895 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179060895"}]},"ts":"1690179060895"} 2023-07-24 06:11:00,897 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-24 06:11:00,901 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-24 06:11:00,902 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, UNASSIGN}] 2023-07-24 06:11:00,904 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, UNASSIGN 2023-07-24 06:11:00,905 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=268b2c16be81f1933c3113045c14d813, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:00,906 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179060905"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179060905"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179060905"}]},"ts":"1690179060905"} 2023-07-24 06:11:00,908 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; CloseRegionProcedure 268b2c16be81f1933c3113045c14d813, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:11:00,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 06:11:01,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 268b2c16be81f1933c3113045c14d813 2023-07-24 06:11:01,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 268b2c16be81f1933c3113045c14d813, disabling compactions & flushes 2023-07-24 06:11:01,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:11:01,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:11:01,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. after waiting 0 ms 2023-07-24 06:11:01,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 2023-07-24 06:11:01,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:11:01,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813. 
2023-07-24 06:11:01,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 268b2c16be81f1933c3113045c14d813: 2023-07-24 06:11:01,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 268b2c16be81f1933c3113045c14d813 2023-07-24 06:11:01,070 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=268b2c16be81f1933c3113045c14d813, regionState=CLOSED 2023-07-24 06:11:01,070 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690179061069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179061069"}]},"ts":"1690179061069"} 2023-07-24 06:11:01,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-24 06:11:01,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; CloseRegionProcedure 268b2c16be81f1933c3113045c14d813, server=jenkins-hbase4.apache.org,34793,1690179046626 in 163 msec 2023-07-24 06:11:01,074 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-24 06:11:01,074 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=268b2c16be81f1933c3113045c14d813, UNASSIGN in 171 msec 2023-07-24 06:11:01,075 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179061075"}]},"ts":"1690179061075"} 2023-07-24 06:11:01,076 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-24 06:11:01,078 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-24 06:11:01,080 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 191 msec 2023-07-24 06:11:01,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 06:11:01,198 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 104 completed 2023-07-24 06:11:01,199 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-24 06:11:01,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 06:11:01,201 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 06:11:01,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1676326282' 2023-07-24 06:11:01,202 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=107, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 06:11:01,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:11:01,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:01,416 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 2023-07-24 06:11:01,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 06:11:01,420 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/recovered.edits] 2023-07-24 06:11:01,429 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/recovered.edits/7.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813/recovered.edits/7.seqid 2023-07-24 06:11:01,429 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/GrouptestMultiTableMoveB/268b2c16be81f1933c3113045c14d813 2023-07-24 06:11:01,429 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 06:11:01,433 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=107, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 06:11:01,438 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-24 06:11:01,446 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-24 06:11:01,448 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=107, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 06:11:01,448 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-24 06:11:01,448 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179061448"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:01,451 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 06:11:01,451 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 268b2c16be81f1933c3113045c14d813, NAME => 'GrouptestMultiTableMoveB,,1690179058785.268b2c16be81f1933c3113045c14d813.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 06:11:01,451 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-24 06:11:01,452 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690179061451"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:01,454 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-24 06:11:01,457 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=107, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 06:11:01,458 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 258 msec 2023-07-24 06:11:01,487 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 06:11:01,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 06:11:01,521 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-24 06:11:01,525 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,525 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:01,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
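[Editor's note, illustrative sketch] The DeleteTableProcedure (pid=107) above archives the region directory under the cluster's archive path, deletes the region row and the table state row from hbase:meta, and drops the table descriptor. A short continuation of the previous sketch, assuming the same Admin handle; the tableExists check is only there to illustrate the end state:

// Assuming the same try-with-resources Admin handle as in the sketch above.
TableName table = TableName.valueOf("GrouptestMultiTableMoveB");
if (admin.isTableEnabled(table)) {
  admin.disableTable(table);   // DisableTableProcedure, pid=104 above
}
// DeleteTableProcedure, pid=107 above: region dirs are moved to the archive
// directory, hbase:meta rows are removed, and the descriptor is dropped.
admin.deleteTable(table);
boolean stillThere = admin.tableExists(table);   // false once pid=107 finishes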
2023-07-24 06:11:01,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:01,529 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:01,529 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:01,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:11:01,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 06:11:01,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:01,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:01,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
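[Editor's note, illustrative sketch] The RSGroupAdminService requests in this stretch (MoveTables with an empty set, MoveServers, RemoveRSGroup, and the AddRSGroup that follows in the next records) are TestRSGroupsBase's per-method cleanup: servers are moved back to the default group, the per-test group Group_testMultiTableMove_1676326282 is dropped, and a "master" group is re-created. A minimal client-side sketch of those calls, assuming the coprocessor-based RSGroupAdminClient from the hbase-rsgroup module; connection setup and error handling are omitted, and the exact signatures should be treated as assumptions:

import java.util.Collections;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);   // conn: an open Connection
// Move the lone server of the per-test group back to the default group
// ("Move servers done: Group_testMultiTableMove_1676326282 => default" in the records that follow).
rsGroupAdmin.moveServers(
    Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:34793")),
    RSGroupInfo.DEFAULT_GROUP);
// Drop the now-empty per-test group, then re-create the "master" group used by the tests.
rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_1676326282");
rsGroupAdmin.addRSGroup("master");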
2023-07-24 06:11:01,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:01,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34793] to rsgroup default 2023-07-24 06:11:01,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1676326282 2023-07-24 06:11:01,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:01,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1676326282, current retry=0 2023-07-24 06:11:01,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626] are moved back to Group_testMultiTableMove_1676326282 2023-07-24 06:11:01,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1676326282 => default 2023-07-24 06:11:01,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1676326282 2023-07-24 06:11:01,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:01,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:01,560 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:01,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:01,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:01,568 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:01,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:01,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:01,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 509 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180261575, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:01,576 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 06:11:01,579 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:01,580 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,580 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,580 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:01,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:01,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,606 INFO [Listener at localhost/46655] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=500 (was 500), OpenFileDescriptor=749 (was 778), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=374 (was 371) - SystemLoadAverage LEAK? -, ProcessCount=177 (was 177), AvailableMemoryMB=6204 (was 6382) 2023-07-24 06:11:01,624 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=500, OpenFileDescriptor=749, MaxFileDescriptor=60000, SystemLoadAverage=374, ProcessCount=177, AvailableMemoryMB=6204 2023-07-24 06:11:01,624 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-24 06:11:01,628 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,629 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:01,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
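[Editor's note, illustrative sketch] The ConstraintException traces above are expected: jenkins-hbase4.apache.org:39303 is the active master's RPC address, not a live region server, so RSGroupAdminServer.moveServers rejects the attempt to place it in the "master" group, and TestRSGroupsBase just logs "Got this on setup, FYI" and carries on. A hedged sketch of that tolerant pattern, reusing the rsGroupAdmin handle from the sketch above; names are copied from the log and the handling is illustrative:

import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;

try {
  // 39303 is the master's port in this run, so the server-side check
  // "Server ... is either offline or it does not exist" fires.
  rsGroupAdmin.moveServers(
      Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:39303")),
      "master");
} catch (ConstraintException expected) {
  // Tolerated during setup/teardown; the test proceeds and then waits for the
  // group layout to settle ("Waiting for cleanup to finish ..." above).
}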
2023-07-24 06:11:01,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:01,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:01,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:01,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:01,637 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:01,640 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:01,641 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:01,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:01,647 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:01,651 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,651 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,653 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:01,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:01,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 537 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180261653, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:01,654 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:01,656 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:01,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,658 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:01,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:01,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:01,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,661 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-24 06:11:01,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 06:11:01,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:01,671 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:01,674 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,674 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:34793] to rsgroup oldGroup 2023-07-24 06:11:01,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 06:11:01,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:01,682 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 06:11:01,682 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942] are moved back to default 2023-07-24 06:11:01,682 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-24 06:11:01,682 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,686 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,686 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 06:11:01,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 06:11:01,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:01,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,691 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-24 06:11:01,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 06:11:01,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 06:11:01,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:01,698 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:01,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,704 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38203] to rsgroup anotherRSGroup 2023-07-24 06:11:01,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 06:11:01,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 06:11:01,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-24 06:11:01,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:01,710 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 06:11:01,710 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38203,1690179042473] are moved back to default 2023-07-24 06:11:01,710 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-24 06:11:01,710 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 06:11:01,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 06:11:01,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,723 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-24 06:11:01,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:01,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:53912 deadline: 1690180261722, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-24 06:11:01,724 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-24 06:11:01,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:01,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:53912 deadline: 1690180261724, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-24 06:11:01,725 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-24 06:11:01,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:01,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:53912 deadline: 1690180261725, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-24 06:11:01,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-24 06:11:01,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:01,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:53912 deadline: 1690180261726, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-24 06:11:01,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:01,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
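The three rejected renames above each trip a different server-side constraint: the source group does not exist, the target group already exists, and the default group can never be renamed. The following is only an illustrative sketch of how a client could exercise the same checks; it assumes the RSGroupAdminClient class seen in the stack traces exposes a renameRSGroup(String, String) method and a Connection-based constructor, neither of which is confirmed by this log.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameConstraintSketch {
  // Issues one rename request and prints the ConstraintException the master
  // raises, mirroring the three rejected calls recorded in the log above.
  static void tryRename(RSGroupAdminClient rsGroupAdmin, String from, String to) throws IOException {
    try {
      rsGroupAdmin.renameRSGroup(from, to);                 // assumed client method
      System.out.println("renamed " + from + " -> " + to);
    } catch (ConstraintException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }

  static void demo(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // assumed constructor
    tryRename(rsGroupAdmin, "nonExistingRSGroup", "newRSGroup1");   // source group missing
    tryRename(rsGroupAdmin, "oldGroup", "anotherRSGroup");          // target already exists
    tryRename(rsGroupAdmin, "default", "newRSGroup2");              // default cannot be renamed
  }
}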
2023-07-24 06:11:01,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:01,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38203] to rsgroup default 2023-07-24 06:11:01,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 06:11:01,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 06:11:01,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:01,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-24 06:11:01,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38203,1690179042473] are moved back to anotherRSGroup 2023-07-24 06:11:01,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-24 06:11:01,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-24 06:11:01,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 06:11:01,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 06:11:01,743 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:01,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:01,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-24 06:11:01,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:01,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:34793] to rsgroup default 2023-07-24 06:11:01,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 06:11:01,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:01,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-24 06:11:01,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942] are moved back to oldGroup 2023-07-24 06:11:01,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-24 06:11:01,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,753 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-24 06:11:01,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 06:11:01,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:01,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:01,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
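The cleanup sequence above (an empty moveTables call, servers moved back to default, then the group removed) repeats once per non-default group. A condensed sketch of that loop follows; it assumes the RSGroupAdminClient methods listRSGroups, moveServers and removeRSGroup behave like the correspondingly named RPCs in this log, which is plausible but not verified here.

import java.io.IOException;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class GroupTeardownSketch {
  // Moves every server of every non-default group back to "default" and then
  // drops the group, mirroring the teardown sequence recorded in the log.
  static void restoreDefaultGroup(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn); // assumed constructor
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // the default group is never removed
      }
      Set<Address> servers = group.getServers();
      if (!servers.isEmpty()) {
        rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
      }
      rsGroupAdmin.removeRSGroup(group.getName());
    }
  }
}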
2023-07-24 06:11:01,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:01,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:01,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,761 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:01,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:01,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:01,770 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:01,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:01,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:01,782 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:01,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:01,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:01,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 613 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180261787, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:01,788 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:01,790 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:01,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,791 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:01,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:01,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,813 INFO [Listener at localhost/46655] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=503 (was 500) Potentially hanging thread: hconnection-0x63197ba-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=749 (was 749), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=374 (was 374), ProcessCount=177 (was 177), AvailableMemoryMB=6200 (was 6204) 2023-07-24 06:11:01,813 WARN [Listener at localhost/46655] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-24 06:11:01,835 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=503, OpenFileDescriptor=749, MaxFileDescriptor=60000, SystemLoadAverage=374, ProcessCount=177, AvailableMemoryMB=6200 2023-07-24 06:11:01,835 WARN [Listener at localhost/46655] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-24 06:11:01,836 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-24 06:11:01,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,848 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:01,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
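The ResourceChecker entries above snapshot thread count, open file descriptors, load average and memory before and after each test, and warn once the thread count passes 500. The idea can be illustrated with the JDK's ThreadMXBean; this sketch is only an analogy for the kind of bookkeeping involved, not HBase's ResourceChecker implementation.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadLeakCheckSketch {
  private static final int THREAD_WARN_THRESHOLD = 500; // same limit the log warns about
  private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();
  private int before;

  // Call before the test body runs.
  void beforeTest() {
    before = threads.getThreadCount();
  }

  // Call after the test body; reports growth and the absolute ceiling.
  void afterTest(String testName) {
    int after = threads.getThreadCount();
    if (after > THREAD_WARN_THRESHOLD) {
      System.out.printf("WARN %s: Thread=%d is superior to %d%n", testName, after, THREAD_WARN_THRESHOLD);
    }
    if (after > before) {
      System.out.printf("INFO %s: thread count grew from %d to %d (possible leak)%n",
          testName, before, after);
    }
  }
}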
2023-07-24 06:11:01,848 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:01,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:01,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:01,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:01,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:01,862 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:01,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:01,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:01,869 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:01,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,879 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:01,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:01,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 641 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180261879, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:01,881 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:01,883 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:01,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,884 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:01,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:01,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:01,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-24 06:11:01,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 06:11:01,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:01,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:01,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,901 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:34793] to rsgroup oldgroup 2023-07-24 06:11:01,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 06:11:01,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:01,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 06:11:01,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942] are moved back to default 2023-07-24 06:11:01,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-24 06:11:01,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:01,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:01,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:01,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 06:11:01,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:01,925 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:01,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-24 06:11:01,929 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:01,929 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 108 2023-07-24 06:11:01,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 06:11:01,931 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 06:11:01,932 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:01,932 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:01,933 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:01,935 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:01,937 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:01,938 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f empty. 
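The create request logged above builds 'testRename' with a single column family 'tr' and default attributes, driven through CreateTableProcedure pid=108. The same table could be requested from a client with the standard HBase 2.x Admin API; only the table and family names are taken from the log, everything else is left at defaults.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTableSketch {
  // Creates the single-family table the log shows being built by
  // CreateTableProcedure pid=108; createTable blocks until the procedure finishes.
  static void createTestRename(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      TableDescriptorBuilder table = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("tr")));
      admin.createTable(table.build());
    }
  }
}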
2023-07-24 06:11:01,939 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:01,939 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-24 06:11:01,955 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:01,956 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2370e0157d921fd1b13ab1255ffb9e5f, NAME => 'testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:11:01,969 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:01,969 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 2370e0157d921fd1b13ab1255ffb9e5f, disabling compactions & flushes 2023-07-24 06:11:01,969 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:01,969 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:01,969 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. after waiting 0 ms 2023-07-24 06:11:01,969 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:01,969 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:01,969 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 2370e0157d921fd1b13ab1255ffb9e5f: 2023-07-24 06:11:01,971 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:01,972 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179061972"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179061972"}]},"ts":"1690179061972"} 2023-07-24 06:11:01,975 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 06:11:01,975 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:01,976 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179061976"}]},"ts":"1690179061976"} 2023-07-24 06:11:01,977 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-24 06:11:01,981 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:01,982 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:01,982 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:01,982 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:01,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, ASSIGN}] 2023-07-24 06:11:01,987 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, ASSIGN 2023-07-24 06:11:01,987 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:11:02,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 06:11:02,138 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
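Once the assignment initialized above completes, the region's location is recorded in hbase:meta and can be read back by any client. A short sketch using the standard RegionLocator API; the table name comes from the log, the rest is generic.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  // Prints the encoded region name and hosting server for every region of
  // testRename, which is how a test could confirm the assignment logged above.
  static void printLocations(Connection conn) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}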
2023-07-24 06:11:02,139 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:02,140 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179062139"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179062139"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179062139"}]},"ts":"1690179062139"} 2023-07-24 06:11:02,142 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE; OpenRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:02,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 06:11:02,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:02,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2370e0157d921fd1b13ab1255ffb9e5f, NAME => 'testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:02,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:02,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,301 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,302 DEBUG [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/tr 2023-07-24 06:11:02,302 DEBUG [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/tr 2023-07-24 06:11:02,303 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2370e0157d921fd1b13ab1255ffb9e5f columnFamilyName tr 2023-07-24 06:11:02,303 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] regionserver.HStore(310): Store=2370e0157d921fd1b13ab1255ffb9e5f/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:02,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:02,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2370e0157d921fd1b13ab1255ffb9e5f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12034864640, jitterRate=0.1208341121673584}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:02,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2370e0157d921fd1b13ab1255ffb9e5f: 2023-07-24 06:11:02,311 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f., pid=110, masterSystemTime=1690179062294 2023-07-24 06:11:02,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:02,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 
2023-07-24 06:11:02,313 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:02,314 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179062313"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179062313"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179062313"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179062313"}]},"ts":"1690179062313"} 2023-07-24 06:11:02,316 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-24 06:11:02,316 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; OpenRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,40449,1690179042726 in 173 msec 2023-07-24 06:11:02,318 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-24 06:11:02,318 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, ASSIGN in 334 msec 2023-07-24 06:11:02,319 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:02,319 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179062319"}]},"ts":"1690179062319"} 2023-07-24 06:11:02,320 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-24 06:11:02,324 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:02,325 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; CreateTableProcedure table=testRename in 398 msec 2023-07-24 06:11:02,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-24 06:11:02,535 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 108 completed 2023-07-24 06:11:02,535 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-24 06:11:02,536 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:02,550 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-24 06:11:02,551 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:02,551 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
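
For orientation: the CreateTableProcedure run traced above (pid=108) is what the master executes when a client issues an ordinary createTable for a one-family table, and the "Waiting until all regions of table testRename get assigned" lines are the test utility blocking until assignment finishes. A minimal client-side sketch in Java, assuming the standard HBase 2.x Admin API; the class and variable names here are illustrative and not taken from the test source:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          // Create 'testRename' with the single column family 'tr', matching the
          // table descriptor logged by RegionOpenAndInit above (all other attributes default).
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("testRename"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
              .build());
          // The master then runs a CreateTableProcedure (pid=108 in this run):
          // write the FS layout, add the region to hbase:meta, assign it.
        }
      }
    }
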
2023-07-24 06:11:02,554 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-24 06:11:02,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 06:11:02,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:02,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:02,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:02,560 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-24 06:11:02,560 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region 2370e0157d921fd1b13ab1255ffb9e5f to RSGroup oldgroup 2023-07-24 06:11:02,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:02,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:02,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:02,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:02,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:02,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, REOPEN/MOVE 2023-07-24 06:11:02,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-24 06:11:02,563 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, REOPEN/MOVE 2023-07-24 06:11:02,565 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:02,565 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179062565"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179062565"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179062565"}]},"ts":"1690179062565"} 2023-07-24 06:11:02,570 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, 
ppid=111, state=RUNNABLE; CloseRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:02,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2370e0157d921fd1b13ab1255ffb9e5f, disabling compactions & flushes 2023-07-24 06:11:02,729 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:02,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:02,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. after waiting 0 ms 2023-07-24 06:11:02,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:02,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:02,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:02,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2370e0157d921fd1b13ab1255ffb9e5f: 2023-07-24 06:11:02,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2370e0157d921fd1b13ab1255ffb9e5f move to jenkins-hbase4.apache.org,34793,1690179046626 record at close sequenceid=2 2023-07-24 06:11:02,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:02,736 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=CLOSED 2023-07-24 06:11:02,736 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179062736"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179062736"}]},"ts":"1690179062736"} 2023-07-24 06:11:02,738 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-24 06:11:02,739 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,40449,1690179042726 in 167 msec 2023-07-24 06:11:02,739 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34793,1690179046626; 
forceNewPlan=false, retain=false 2023-07-24 06:11:02,890 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 06:11:02,891 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:02,891 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179062891"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179062891"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179062891"}]},"ts":"1690179062891"} 2023-07-24 06:11:02,893 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=111, state=RUNNABLE; OpenRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:11:03,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:03,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2370e0157d921fd1b13ab1255ffb9e5f, NAME => 'testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:03,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:03,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:03,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:03,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:03,053 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:03,054 DEBUG [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/tr 2023-07-24 06:11:03,054 DEBUG [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/tr 2023-07-24 06:11:03,055 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2370e0157d921fd1b13ab1255ffb9e5f columnFamilyName tr 2023-07-24 06:11:03,056 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] regionserver.HStore(310): Store=2370e0157d921fd1b13ab1255ffb9e5f/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:03,056 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:03,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:03,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:03,065 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2370e0157d921fd1b13ab1255ffb9e5f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11925083040, jitterRate=0.11060990393161774}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:03,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2370e0157d921fd1b13ab1255ffb9e5f: 2023-07-24 06:11:03,066 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f., pid=113, masterSystemTime=1690179063046 2023-07-24 06:11:03,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:03,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 
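
The pid=111 REOPEN/MOVE traced above is the server side of an rsgroup table move: the region is closed on its current RegionServer (port 40449) and reopened on a server that belongs to the target group (port 34793), with hbase:meta updated at each step. On the client this is a single call. A sketch assuming the RSGroupAdminClient shipped in the hbase-rsgroup module on branch-2; group and table names are taken from the log, everything else is illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToOldGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Re-points the table's group mapping in the /hbase/rsgroup znodes, then
          // moves every region of the table onto servers of 'oldgroup' -- the
          // CloseRegionProcedure / OpenRegionProcedure pair seen above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
        }
      }
    }
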
2023-07-24 06:11:03,069 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:03,069 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179063069"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179063069"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179063069"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179063069"}]},"ts":"1690179063069"} 2023-07-24 06:11:03,074 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=111 2023-07-24 06:11:03,074 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=111, state=SUCCESS; OpenRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,34793,1690179046626 in 178 msec 2023-07-24 06:11:03,076 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, REOPEN/MOVE in 513 msec 2023-07-24 06:11:03,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=111 2023-07-24 06:11:03,563 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-24 06:11:03,563 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:03,567 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:03,567 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:03,570 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:03,571 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 06:11:03,571 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:03,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 06:11:03,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:03,573 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 06:11:03,573 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:03,574 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:03,574 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:03,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-24 06:11:03,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 06:11:03,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 06:11:03,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:03,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:03,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:03,582 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:03,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:03,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:03,588 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38203] to rsgroup normal 2023-07-24 06:11:03,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 06:11:03,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 06:11:03,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:03,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:03,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:03,593 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 06:11:03,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38203,1690179042473] are moved back to default 2023-07-24 06:11:03,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-24 06:11:03,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:03,597 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:03,597 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:03,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 06:11:03,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:03,602 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:03,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-24 06:11:03,605 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:03,605 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 114 2023-07-24 06:11:03,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 06:11:03,607 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 06:11:03,608 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 06:11:03,608 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:03,608 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-24 06:11:03,609 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:03,617 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:03,619 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,620 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 empty. 2023-07-24 06:11:03,620 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,620 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-24 06:11:03,636 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:03,638 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => f7fdbbdd2ac0a663780a488a70ff77f3, NAME => 'unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:11:03,656 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:03,656 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing f7fdbbdd2ac0a663780a488a70ff77f3, disabling compactions & flushes 2023-07-24 06:11:03,656 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:03,656 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:03,656 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. after waiting 0 ms 2023-07-24 06:11:03,656 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:03,656 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 
2023-07-24 06:11:03,656 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for f7fdbbdd2ac0a663780a488a70ff77f3: 2023-07-24 06:11:03,659 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:03,660 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179063660"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179063660"}]},"ts":"1690179063660"} 2023-07-24 06:11:03,662 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:11:03,663 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:03,663 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179063663"}]},"ts":"1690179063663"} 2023-07-24 06:11:03,665 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-24 06:11:03,669 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, ASSIGN}] 2023-07-24 06:11:03,672 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, ASSIGN 2023-07-24 06:11:03,673 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:11:03,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 06:11:03,825 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:03,825 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179063825"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179063825"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179063825"}]},"ts":"1690179063825"} 2023-07-24 06:11:03,827 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:03,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=114 2023-07-24 06:11:03,983 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:03,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f7fdbbdd2ac0a663780a488a70ff77f3, NAME => 'unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:03,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:03,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,985 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,987 DEBUG [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/ut 2023-07-24 06:11:03,987 DEBUG [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/ut 2023-07-24 06:11:03,988 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f7fdbbdd2ac0a663780a488a70ff77f3 columnFamilyName ut 2023-07-24 06:11:03,988 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] regionserver.HStore(310): Store=f7fdbbdd2ac0a663780a488a70ff77f3/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:03,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:03,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:03,996 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f7fdbbdd2ac0a663780a488a70ff77f3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10126334880, jitterRate=-0.05691157281398773}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:03,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f7fdbbdd2ac0a663780a488a70ff77f3: 2023-07-24 06:11:03,997 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3., pid=116, masterSystemTime=1690179063978 2023-07-24 06:11:03,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:03,998 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 
2023-07-24 06:11:03,999 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:03,999 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179063998"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179063998"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179063998"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179063998"}]},"ts":"1690179063998"} 2023-07-24 06:11:04,002 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-24 06:11:04,002 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,40449,1690179042726 in 173 msec 2023-07-24 06:11:04,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-24 06:11:04,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, ASSIGN in 333 msec 2023-07-24 06:11:04,004 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:04,004 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179064004"}]},"ts":"1690179064004"} 2023-07-24 06:11:04,006 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-24 06:11:04,009 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:04,010 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=unmovedTable in 407 msec 2023-07-24 06:11:04,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 06:11:04,210 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 114 completed 2023-07-24 06:11:04,210 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-24 06:11:04,211 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:04,214 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-24 06:11:04,215 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:04,215 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
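
Before creating unmovedTable, the log also shows the test adding an rsgroup 'normal' and moving server jenkins-hbase4.apache.org:38203 into it (the AddRSGroup and MoveServers requests above, with "Moving 0 region(s)" because that server held no default-group regions). A hedged sketch of the corresponding client calls, again assuming RSGroupAdminClient; the host:port comes from the log, the rest is illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class AddNormalGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("normal");   // "add rsgroup normal" in the log
          // Move one RegionServer out of 'default' into 'normal'; any regions it
          // hosts for default-group tables are drained back to 'default' first.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38203)),
              "normal");
        }
      }
    }
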
2023-07-24 06:11:04,217 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-24 06:11:04,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 06:11:04,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 06:11:04,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:04,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:04,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:04,222 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-24 06:11:04,222 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region f7fdbbdd2ac0a663780a488a70ff77f3 to RSGroup normal 2023-07-24 06:11:04,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, REOPEN/MOVE 2023-07-24 06:11:04,223 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-24 06:11:04,223 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, REOPEN/MOVE 2023-07-24 06:11:04,224 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:04,225 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179064224"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179064224"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179064224"}]},"ts":"1690179064224"} 2023-07-24 06:11:04,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:04,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f7fdbbdd2ac0a663780a488a70ff77f3, disabling compactions & flushes 2023-07-24 06:11:04,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 
2023-07-24 06:11:04,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:04,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. after waiting 0 ms 2023-07-24 06:11:04,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:04,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:04,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:04,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f7fdbbdd2ac0a663780a488a70ff77f3: 2023-07-24 06:11:04,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f7fdbbdd2ac0a663780a488a70ff77f3 move to jenkins-hbase4.apache.org,38203,1690179042473 record at close sequenceid=2 2023-07-24 06:11:04,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,387 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=CLOSED 2023-07-24 06:11:04,388 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179064387"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179064387"}]},"ts":"1690179064387"} 2023-07-24 06:11:04,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-24 06:11:04,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,40449,1690179042726 in 163 msec 2023-07-24 06:11:04,391 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:11:04,541 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:04,542 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179064541"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179064541"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179064541"}]},"ts":"1690179064541"} 2023-07-24 06:11:04,543 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:04,704 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:04,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f7fdbbdd2ac0a663780a488a70ff77f3, NAME => 'unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:04,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:04,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,707 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,708 DEBUG [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/ut 2023-07-24 06:11:04,708 DEBUG [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/ut 2023-07-24 06:11:04,708 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
f7fdbbdd2ac0a663780a488a70ff77f3 columnFamilyName ut 2023-07-24 06:11:04,709 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] regionserver.HStore(310): Store=f7fdbbdd2ac0a663780a488a70ff77f3/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:04,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:04,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f7fdbbdd2ac0a663780a488a70ff77f3; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10915957280, jitterRate=0.01662774384021759}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:04,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f7fdbbdd2ac0a663780a488a70ff77f3: 2023-07-24 06:11:04,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3., pid=119, masterSystemTime=1690179064695 2023-07-24 06:11:04,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:04,720 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 
2023-07-24 06:11:04,720 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:04,721 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179064720"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179064720"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179064720"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179064720"}]},"ts":"1690179064720"} 2023-07-24 06:11:04,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-24 06:11:04,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,38203,1690179042473 in 179 msec 2023-07-24 06:11:04,725 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, REOPEN/MOVE in 502 msec 2023-07-24 06:11:04,944 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-24 06:11:05,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-24 06:11:05,224 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
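The records above trace one rsgroup table move end to end: the MoveTables RPC updates the group znodes, then the master runs a TransitRegionStateProcedure (REOPEN/MOVE) for the region, which closes it on a server of the old group and reopens it on a server of the target group before the blocked handler returns. A minimal sketch of how such a move is issued from a client, assuming the RSGroupAdminClient API of the hbase-rsgroup module that this test exercises (the class name is illustrative; group and table names are the ones in the log):

    import java.util.Collections;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        // Assumption: RSGroupAdminClient(Connection) and moveTables(Set<TableName>, String)
        // as used by the rsgroup tests in this branch.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Mirrors the request logged above: move tables [unmovedTable] to rsgroup normal.
          rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
          // The master then runs one REOPEN/MOVE procedure per region of the table,
          // closing it on the old group's server and reopening it on the target group.
        }
      }
    }

The same close/reopen flow repeats below when the table is moved back to the default group.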
2023-07-24 06:11:05,224 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:05,228 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:05,228 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:05,231 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:05,232 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 06:11:05,232 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:05,232 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 06:11:05,232 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:05,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 06:11:05,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:05,234 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-24 06:11:05,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 06:11:05,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:05,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:05,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 06:11:05,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-24 06:11:05,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-24 06:11:05,244 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:05,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:05,248 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-24 06:11:05,248 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:05,250 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 06:11:05,250 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:05,251 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 06:11:05,251 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:05,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:05,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:05,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-24 06:11:05,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 06:11:05,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:05,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:05,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 06:11:05,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:05,264 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-24 06:11:05,264 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region f7fdbbdd2ac0a663780a488a70ff77f3 to RSGroup default 2023-07-24 06:11:05,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, REOPEN/MOVE 2023-07-24 06:11:05,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 06:11:05,265 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, REOPEN/MOVE 2023-07-24 06:11:05,266 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:05,266 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179065266"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179065266"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179065266"}]},"ts":"1690179065266"} 2023-07-24 06:11:05,267 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:05,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f7fdbbdd2ac0a663780a488a70ff77f3, disabling compactions & flushes 2023-07-24 06:11:05,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:05,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:05,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. after waiting 0 ms 2023-07-24 06:11:05,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:05,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:11:05,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 
2023-07-24 06:11:05,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f7fdbbdd2ac0a663780a488a70ff77f3: 2023-07-24 06:11:05,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f7fdbbdd2ac0a663780a488a70ff77f3 move to jenkins-hbase4.apache.org,40449,1690179042726 record at close sequenceid=5 2023-07-24 06:11:05,429 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,429 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=CLOSED 2023-07-24 06:11:05,429 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179065429"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179065429"}]},"ts":"1690179065429"} 2023-07-24 06:11:05,436 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-24 06:11:05,436 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,38203,1690179042473 in 167 msec 2023-07-24 06:11:05,437 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:11:05,587 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:05,587 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179065587"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179065587"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179065587"}]},"ts":"1690179065587"} 2023-07-24 06:11:05,589 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:05,745 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 
2023-07-24 06:11:05,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f7fdbbdd2ac0a663780a488a70ff77f3, NAME => 'unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:05,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:05,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,747 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,748 DEBUG [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/ut 2023-07-24 06:11:05,749 DEBUG [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/ut 2023-07-24 06:11:05,749 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f7fdbbdd2ac0a663780a488a70ff77f3 columnFamilyName ut 2023-07-24 06:11:05,750 INFO [StoreOpener-f7fdbbdd2ac0a663780a488a70ff77f3-1] regionserver.HStore(310): Store=f7fdbbdd2ac0a663780a488a70ff77f3/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:05,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:05,757 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f7fdbbdd2ac0a663780a488a70ff77f3; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10775676800, jitterRate=0.003563106060028076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:05,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f7fdbbdd2ac0a663780a488a70ff77f3: 2023-07-24 06:11:05,758 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3., pid=122, masterSystemTime=1690179065741 2023-07-24 06:11:05,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:05,760 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:05,760 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f7fdbbdd2ac0a663780a488a70ff77f3, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:05,760 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690179065760"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179065760"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179065760"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179065760"}]},"ts":"1690179065760"} 2023-07-24 06:11:05,763 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-24 06:11:05,763 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure f7fdbbdd2ac0a663780a488a70ff77f3, server=jenkins-hbase4.apache.org,40449,1690179042726 in 172 msec 2023-07-24 06:11:05,764 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=f7fdbbdd2ac0a663780a488a70ff77f3, REOPEN/MOVE in 499 msec 2023-07-24 06:11:06,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-24 06:11:06,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
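Between the two table moves the group itself is renamed (06:11:05,234: rename rsgroup from oldgroup to newgroup), which is the operation under test in testRenameRSGroup. A sketch of the corresponding client call, assuming a renameRSGroup(oldName, newName) method backing the RSGroupAdminService.RenameRSGroup request logged above; the follow-up lookup mirrors the GetRSGroupInfo checks the test performs afterwards:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Assumption: a renameRSGroup client method corresponding to the
          // RenameRSGroup service call seen in the log; names are from the test.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
          // Verify, as the test does, that the renamed group still carries its tables.
          RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
          System.out.println(renamed.getTables());
        }
      }
    }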
2023-07-24 06:11:06,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:06,267 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38203] to rsgroup default 2023-07-24 06:11:06,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 06:11:06,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:06,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:06,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 06:11:06,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:06,276 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-24 06:11:06,276 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38203,1690179042473] are moved back to normal 2023-07-24 06:11:06,276 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-24 06:11:06,276 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:06,277 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-24 06:11:06,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:06,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:06,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 06:11:06,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 06:11:06,284 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:06,285 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:06,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
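The cleanup recorded above moves the remaining servers back to the default group and then removes the now-empty group "normal"; a group can only be dropped once it no longer holds any servers or tables, which is why the server move comes first. A sketch of that sequence, assuming the moveServers(Set<Address>, String) and removeRSGroup(String) client methods behind the MoveServers and RemoveRSGroup requests in the log (host and port taken from the log):

    import java.util.Collections;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class DropGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move the group's last server back to default, mirroring the logged request
          // "move servers [jenkins-hbase4.apache.org:38203] to rsgroup default".
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38203)), "default");
          // Once the group holds no servers and no tables it can be removed.
          rsGroupAdmin.removeRSGroup("normal");
        }
      }
    }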
2023-07-24 06:11:06,285 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:06,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:06,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:06,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:06,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:06,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 06:11:06,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 06:11:06,297 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:06,300 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-24 06:11:06,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:06,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 06:11:06,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:06,305 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-24 06:11:06,305 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(345): Moving region 2370e0157d921fd1b13ab1255ffb9e5f to RSGroup default 2023-07-24 06:11:06,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, REOPEN/MOVE 2023-07-24 06:11:06,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 06:11:06,306 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, REOPEN/MOVE 2023-07-24 06:11:06,308 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:06,308 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179066308"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179066308"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179066308"}]},"ts":"1690179066308"} 2023-07-24 06:11:06,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,34793,1690179046626}] 2023-07-24 06:11:06,463 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2370e0157d921fd1b13ab1255ffb9e5f, disabling compactions & flushes 2023-07-24 06:11:06,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:06,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:06,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. after waiting 0 ms 2023-07-24 06:11:06,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:06,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 06:11:06,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 
2023-07-24 06:11:06,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2370e0157d921fd1b13ab1255ffb9e5f: 2023-07-24 06:11:06,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2370e0157d921fd1b13ab1255ffb9e5f move to jenkins-hbase4.apache.org,38203,1690179042473 record at close sequenceid=5 2023-07-24 06:11:06,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,490 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=CLOSED 2023-07-24 06:11:06,490 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179066489"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179066489"}]},"ts":"1690179066489"} 2023-07-24 06:11:06,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-24 06:11:06,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,34793,1690179046626 in 190 msec 2023-07-24 06:11:06,502 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:11:06,569 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 06:11:06,652 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 06:11:06,653 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:06,653 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179066653"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179066653"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179066653"}]},"ts":"1690179066653"} 2023-07-24 06:11:06,655 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:06,812 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 
2023-07-24 06:11:06,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2370e0157d921fd1b13ab1255ffb9e5f, NAME => 'testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:06,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:06,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,814 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,815 DEBUG [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/tr 2023-07-24 06:11:06,815 DEBUG [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/tr 2023-07-24 06:11:06,816 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2370e0157d921fd1b13ab1255ffb9e5f columnFamilyName tr 2023-07-24 06:11:06,816 INFO [StoreOpener-2370e0157d921fd1b13ab1255ffb9e5f-1] regionserver.HStore(310): Store=2370e0157d921fd1b13ab1255ffb9e5f/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:06,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:06,822 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2370e0157d921fd1b13ab1255ffb9e5f; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10408197920, jitterRate=-0.03066103160381317}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:06,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2370e0157d921fd1b13ab1255ffb9e5f: 2023-07-24 06:11:06,822 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f., pid=125, masterSystemTime=1690179066808 2023-07-24 06:11:06,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:06,824 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:06,824 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2370e0157d921fd1b13ab1255ffb9e5f, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:06,825 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690179066824"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179066824"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179066824"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179066824"}]},"ts":"1690179066824"} 2023-07-24 06:11:06,827 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-24 06:11:06,827 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 2370e0157d921fd1b13ab1255ffb9e5f, server=jenkins-hbase4.apache.org,38203,1690179042473 in 171 msec 2023-07-24 06:11:06,828 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2370e0157d921fd1b13ab1255ffb9e5f, REOPEN/MOVE in 522 msec 2023-07-24 06:11:07,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-24 06:11:07,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-24 06:11:07,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:07,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:34793] to rsgroup default 2023-07-24 06:11:07,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 06:11:07,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:07,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-24 06:11:07,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942] are moved back to newgroup 2023-07-24 06:11:07,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-24 06:11:07,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:07,314 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-24 06:11:07,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:07,320 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:07,323 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:07,324 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:07,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:07,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:07,337 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:07,340 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,340 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,342 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:07,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:07,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 761 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180267342, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:07,343 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 06:11:07,344 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:07,345 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,345 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,345 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:07,346 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:07,346 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:07,363 INFO [Listener at localhost/46655] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=496 (was 503), OpenFileDescriptor=745 (was 749), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=352 (was 374), ProcessCount=175 (was 177), AvailableMemoryMB=8140 (was 6200) - AvailableMemoryMB LEAK? - 2023-07-24 06:11:07,379 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=496, OpenFileDescriptor=745, MaxFileDescriptor=60000, SystemLoadAverage=352, ProcessCount=175, AvailableMemoryMB=8139 2023-07-24 06:11:07,379 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-24 06:11:07,383 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,383 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,383 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:07,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:11:07,383 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:07,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:07,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:07,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:07,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:07,390 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:07,392 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:07,393 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:07,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:07,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:07,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:07,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:07,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:07,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 789 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180267402, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:07,402 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:07,404 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:07,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,405 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:07,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:07,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:07,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-24 06:11:07,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:07,412 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-24 06:11:07,412 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-24 06:11:07,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-24 06:11:07,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:07,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-24 06:11:07,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:07,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 801 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:53912 deadline: 1690180267413, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-24 06:11:07,415 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-24 06:11:07,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:07,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 804 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:53912 deadline: 1690180267415, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 06:11:07,418 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 06:11:07,418 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-24 06:11:07,423 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-24 06:11:07,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:07,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 808 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:53912 deadline: 1690180267422, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 06:11:07,427 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,427 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:07,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:11:07,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:07,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:07,428 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:07,429 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:07,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:07,438 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:07,442 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:07,442 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:07,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:07,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:07,449 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:07,456 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,456 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,459 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:07,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:07,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 832 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180267459, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:07,462 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:07,464 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:07,464 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,465 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,465 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:07,465 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:07,465 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:07,482 INFO [Listener at localhost/46655] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=499 (was 496) Potentially hanging thread: hconnection-0x369a3209-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x369a3209-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=742 (was 745), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=352 (was 352), ProcessCount=175 (was 175), AvailableMemoryMB=8140 (was 8139) - AvailableMemoryMB LEAK? - 2023-07-24 06:11:07,503 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=499, OpenFileDescriptor=742, MaxFileDescriptor=60000, SystemLoadAverage=352, ProcessCount=175, AvailableMemoryMB=8140 2023-07-24 06:11:07,503 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-24 06:11:07,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,508 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,508 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:07,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:11:07,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:07,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:07,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:07,510 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:07,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:07,515 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:07,517 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:07,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:07,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:07,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:07,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:07,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,543 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:07,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:07,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 860 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180267543, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:07,544 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:07,546 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:07,547 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,547 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,548 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:07,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:07,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:07,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:07,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:07,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1712646620 2023-07-24 06:11:07,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1712646620 2023-07-24 06:11:07,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 
06:11:07,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:07,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:07,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:07,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,568 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:34793] to rsgroup Group_testDisabledTableMove_1712646620 2023-07-24 06:11:07,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1712646620 2023-07-24 06:11:07,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:07,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:07,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 06:11:07,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942] are moved back to default 2023-07-24 06:11:07,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1712646620 2023-07-24 06:11:07,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:07,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:07,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:07,583 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1712646620 2023-07-24 06:11:07,583 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:07,586 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:07,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-24 06:11:07,589 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:07,591 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 126 2023-07-24 06:11:07,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 06:11:07,593 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1712646620 2023-07-24 06:11:07,594 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:07,594 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:07,595 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:07,597 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:07,604 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:07,604 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:07,604 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:07,604 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:07,604 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:07,605 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721 empty. 2023-07-24 06:11:07,605 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810 empty. 2023-07-24 06:11:07,605 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef empty. 2023-07-24 06:11:07,605 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964 empty. 2023-07-24 06:11:07,606 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d empty. 2023-07-24 06:11:07,606 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:07,607 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:07,607 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:07,607 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:07,608 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:07,608 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 06:11:07,637 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:07,638 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => f19ff21ab96e503de37b75392d507964, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:11:07,639 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 35dc8ae44f66e3cab689719268e6a810, NAME => 'Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:11:07,639 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => f21ebe1c2bb7e10b728959f64a217721, NAME => 'Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:11:07,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:07,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing f19ff21ab96e503de37b75392d507964, disabling compactions & flushes 2023-07-24 06:11:07,671 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 2023-07-24 06:11:07,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 2023-07-24 06:11:07,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. after waiting 0 ms 2023-07-24 06:11:07,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 
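[Editor's aside] The AddRSGroup and MoveServers requests logged just above are issued by the test through RSGroupAdminClient, the same class that appears in the earlier teardown stack trace. A minimal sketch of that client-side sequence, assuming an already-open Connection; the host:port values are copied from the log and the wrapper class name is made up for illustration.

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class RsGroupSetupSketch {
  // Mirrors the "add rsgroup" and "move servers [...] to rsgroup" entries above.
  static void addGroupAndMoveServers(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_1712646620");
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37173));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34793));
    // Servers must be live members of the source group; an unknown or offline
    // server is what triggers the ConstraintException seen in the teardown trace.
    rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_1712646620");
  }
}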
2023-07-24 06:11:07,671 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 2023-07-24 06:11:07,671 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for f19ff21ab96e503de37b75392d507964: 2023-07-24 06:11:07,672 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => b13b20d94b5e9816c18365c7782875ef, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:11:07,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:07,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing f21ebe1c2bb7e10b728959f64a217721, disabling compactions & flushes 2023-07-24 06:11:07,673 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 2023-07-24 06:11:07,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 2023-07-24 06:11:07,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. after waiting 0 ms 2023-07-24 06:11:07,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 2023-07-24 06:11:07,673 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 
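[Editor's aside] The CreateTableProcedure above builds Group_testDisabledTableMove with a single family 'f' and five regions split at 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz'. A rough client-side equivalent of that create call; the Admin handle and the exact bytes of the two binary split keys are assumptions read off the escaped keys printed in the log.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

final class CreateTableSketch {
  static void createPreSplitTable(Admin admin) throws IOException {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
        .setRegionReplication(1)                        // REGION_REPLICATION => '1'
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)                          // VERSIONS => '1'
            .setBlocksize(65536)                        // BLOCKSIZE => '65536'
            .build())
        .build();
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },   // i\xBF\x14i\xBE
        { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },          // r\x1C\xC7r\x1B
        Bytes.toBytes("zzzzz") };
    admin.createTable(td, splitKeys);                   // stored as pid=126 above
  }
}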
2023-07-24 06:11:07,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for f21ebe1c2bb7e10b728959f64a217721: 2023-07-24 06:11:07,675 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0288980aebc6448555a2c0b507066a3d, NAME => 'Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp 2023-07-24 06:11:07,687 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:07,687 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing b13b20d94b5e9816c18365c7782875ef, disabling compactions & flushes 2023-07-24 06:11:07,687 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 2023-07-24 06:11:07,687 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 2023-07-24 06:11:07,687 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. after waiting 0 ms 2023-07-24 06:11:07,687 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 2023-07-24 06:11:07,687 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 
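[Editor's aside] The recurring "Checking to see if procedure is done pid=126" entries are the master answering the client's completion polling for that create. On the client side this is just the blocking Admin.createTable call, or equivalently the async variant sketched below; the wrapper method and the five-minute timeout are illustrative only.

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

final class CreateTableAsyncSketch {
  // Submitting asynchronously returns once the CreateTableProcedure is stored;
  // get() then blocks while the client polls the master for completion, which is
  // what produces the repeated "procedure is done pid=126" checks in this log.
  static void createAndWait(Admin admin, TableDescriptor td, byte[][] splitKeys)
      throws Exception {
    Future<Void> pending = admin.createTableAsync(td, splitKeys);
    pending.get(5, TimeUnit.MINUTES);
  }
}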
2023-07-24 06:11:07,687 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for b13b20d94b5e9816c18365c7782875ef: 2023-07-24 06:11:07,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 06:11:07,693 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:07,694 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 0288980aebc6448555a2c0b507066a3d, disabling compactions & flushes 2023-07-24 06:11:07,694 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:07,694 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:07,694 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. after waiting 0 ms 2023-07-24 06:11:07,694 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:07,694 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:07,694 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 0288980aebc6448555a2c0b507066a3d: 2023-07-24 06:11:07,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 06:11:08,071 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:08,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 35dc8ae44f66e3cab689719268e6a810, disabling compactions & flushes 2023-07-24 06:11:08,072 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 
after waiting 0 ms 2023-07-24 06:11:08,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,072 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 35dc8ae44f66e3cab689719268e6a810: 2023-07-24 06:11:08,075 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:08,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068075"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068075"}]},"ts":"1690179068075"} 2023-07-24 06:11:08,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068075"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068075"}]},"ts":"1690179068075"} 2023-07-24 06:11:08,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068075"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068075"}]},"ts":"1690179068075"} 2023-07-24 06:11:08,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068075"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068075"}]},"ts":"1690179068075"} 2023-07-24 06:11:08,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068075"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068075"}]},"ts":"1690179068075"} 2023-07-24 06:11:08,078 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
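[Editor's aside] Once "Added 5 regions to meta" is logged, the new regions are visible to clients. A small hedged check of that state, assuming an Admin on the same mini cluster:

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

final class MetaCheckSketch {
  static void printRegionsFromMeta(Admin admin) throws IOException {
    List<RegionInfo> regions =
        admin.getRegions(TableName.valueOf("Group_testDisabledTableMove"));
    // Expect the five regions whose regioninfo/state Puts appear above,
    // ordered by start key ('', 'aaaaa', the two binary keys, 'zzzzz').
    for (RegionInfo region : regions) {
      System.out.println(region.getEncodedName() + " start="
          + Bytes.toStringBinary(region.getStartKey()));
    }
  }
}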
2023-07-24 06:11:08,079 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:08,079 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179068079"}]},"ts":"1690179068079"} 2023-07-24 06:11:08,080 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-24 06:11:08,089 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:08,089 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:08,089 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:08,089 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:08,089 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f21ebe1c2bb7e10b728959f64a217721, ASSIGN}, {pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=35dc8ae44f66e3cab689719268e6a810, ASSIGN}, {pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f19ff21ab96e503de37b75392d507964, ASSIGN}, {pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b13b20d94b5e9816c18365c7782875ef, ASSIGN}, {pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0288980aebc6448555a2c0b507066a3d, ASSIGN}] 2023-07-24 06:11:08,091 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=35dc8ae44f66e3cab689719268e6a810, ASSIGN 2023-07-24 06:11:08,091 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0288980aebc6448555a2c0b507066a3d, ASSIGN 2023-07-24 06:11:08,092 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f21ebe1c2bb7e10b728959f64a217721, ASSIGN 2023-07-24 06:11:08,092 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b13b20d94b5e9816c18365c7782875ef, ASSIGN 2023-07-24 06:11:08,092 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=35dc8ae44f66e3cab689719268e6a810, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:11:08,092 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0288980aebc6448555a2c0b507066a3d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:11:08,092 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f21ebe1c2bb7e10b728959f64a217721, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40449,1690179042726; forceNewPlan=false, retain=false 2023-07-24 06:11:08,092 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b13b20d94b5e9816c18365c7782875ef, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:11:08,092 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f19ff21ab96e503de37b75392d507964, ASSIGN 2023-07-24 06:11:08,093 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f19ff21ab96e503de37b75392d507964, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38203,1690179042473; forceNewPlan=false, retain=false 2023-07-24 06:11:08,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 06:11:08,243 INFO [jenkins-hbase4:39303] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
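[Editor's aside] With the assignment procedures started, the test has to wait until every region reports an open location before it can act on the table. A minimal polling sketch; HBaseTestingUtility offers equivalent waitFor helpers, and the timeout here just echoes the 60,000 ms Waiter value seen earlier.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

final class WaitForAssignmentSketch {
  static void waitUntilAssigned(Connection conn) throws IOException, InterruptedException {
    TableName tn = TableName.valueOf("Group_testDisabledTableMove");
    long deadline = System.currentTimeMillis() + 60_000L;
    try (RegionLocator locator = conn.getRegionLocator(tn)) {
      while (System.currentTimeMillis() < deadline) {
        boolean allOpen = locator.getAllRegionLocations().stream()
            .allMatch(loc -> loc != null && loc.getServerName() != null);
        if (allOpen) {
          return;              // every region has a hosting server recorded in meta
        }
        Thread.sleep(200);     // brief back-off between meta lookups
      }
      throw new IOException("Regions of " + tn + " not assigned within 60s");
    }
  }
}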
2023-07-24 06:11:08,246 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=0288980aebc6448555a2c0b507066a3d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:08,246 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f19ff21ab96e503de37b75392d507964, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,246 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=35dc8ae44f66e3cab689719268e6a810, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,247 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068246"}]},"ts":"1690179068246"} 2023-07-24 06:11:08,247 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068246"}]},"ts":"1690179068246"} 2023-07-24 06:11:08,246 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=f21ebe1c2bb7e10b728959f64a217721, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:08,246 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=b13b20d94b5e9816c18365c7782875ef, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,247 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068246"}]},"ts":"1690179068246"} 2023-07-24 06:11:08,247 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068246"}]},"ts":"1690179068246"} 2023-07-24 06:11:08,247 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068246"}]},"ts":"1690179068246"} 2023-07-24 06:11:08,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=129, state=RUNNABLE; OpenRegionProcedure f19ff21ab96e503de37b75392d507964, 
server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:08,249 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=128, state=RUNNABLE; OpenRegionProcedure 35dc8ae44f66e3cab689719268e6a810, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:08,250 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=127, state=RUNNABLE; OpenRegionProcedure f21ebe1c2bb7e10b728959f64a217721, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:08,251 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=131, state=RUNNABLE; OpenRegionProcedure 0288980aebc6448555a2c0b507066a3d, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:08,252 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=130, state=RUNNABLE; OpenRegionProcedure b13b20d94b5e9816c18365c7782875ef, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:08,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 35dc8ae44f66e3cab689719268e6a810, NAME => 'Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 06:11:08,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:08,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 
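[Editor's aside] The OpenRegionProcedures above place all five regions on the ...,38203 and ...,40449 servers, which are the ones still in the default group; the two servers moved to Group_testDisabledTableMove_1712646620 (37173 and 34793) receive none, because the table itself has not been moved to that group. A hedged sketch of how the test side could check that placement; the class and method names are illustrative.

import java.io.IOException;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class GroupPlacementCheckSketch {
  static boolean anyRegionOnGroupServers(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo group =
        rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_1712646620");
    Set<Address> groupServers = group.getServers();
    try (RegionLocator locator =
        conn.getRegionLocator(TableName.valueOf("Group_testDisabledTableMove"))) {
      // True only if some region is hosted by a server of the new group;
      // at this point in the log all regions sit on the default group's servers.
      return locator.getAllRegionLocations().stream()
          .anyMatch(loc -> groupServers.contains(loc.getServerName().getAddress()));
    }
  }
}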
2023-07-24 06:11:08,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0288980aebc6448555a2c0b507066a3d, NAME => 'Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 06:11:08,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:08,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:08,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:08,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:08,407 INFO [StoreOpener-35dc8ae44f66e3cab689719268e6a810-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,408 DEBUG [StoreOpener-35dc8ae44f66e3cab689719268e6a810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810/f 2023-07-24 06:11:08,408 DEBUG [StoreOpener-35dc8ae44f66e3cab689719268e6a810-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810/f 2023-07-24 06:11:08,409 INFO [StoreOpener-35dc8ae44f66e3cab689719268e6a810-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 35dc8ae44f66e3cab689719268e6a810 columnFamilyName f 2023-07-24 06:11:08,409 INFO [StoreOpener-35dc8ae44f66e3cab689719268e6a810-1] regionserver.HStore(310): Store=35dc8ae44f66e3cab689719268e6a810/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:08,411 INFO [StoreOpener-0288980aebc6448555a2c0b507066a3d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 
0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:08,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,412 DEBUG [StoreOpener-0288980aebc6448555a2c0b507066a3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d/f 2023-07-24 06:11:08,412 DEBUG [StoreOpener-0288980aebc6448555a2c0b507066a3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d/f 2023-07-24 06:11:08,413 INFO [StoreOpener-0288980aebc6448555a2c0b507066a3d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0288980aebc6448555a2c0b507066a3d columnFamilyName f 2023-07-24 06:11:08,413 INFO [StoreOpener-0288980aebc6448555a2c0b507066a3d-1] regionserver.HStore(310): Store=0288980aebc6448555a2c0b507066a3d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:08,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:08,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:08,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:08,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0288980aebc6448555a2c0b507066a3d 2023-07-24 
06:11:08,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 35dc8ae44f66e3cab689719268e6a810; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11095604160, jitterRate=0.03335866332054138}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:08,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 35dc8ae44f66e3cab689719268e6a810: 2023-07-24 06:11:08,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810., pid=133, masterSystemTime=1690179068400 2023-07-24 06:11:08,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:08,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0288980aebc6448555a2c0b507066a3d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10072665280, jitterRate=-0.06190994381904602}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:08,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0288980aebc6448555a2c0b507066a3d: 2023-07-24 06:11:08,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 
2023-07-24 06:11:08,422 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=35dc8ae44f66e3cab689719268e6a810, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f19ff21ab96e503de37b75392d507964, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 06:11:08,422 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068422"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179068422"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179068422"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179068422"}]},"ts":"1690179068422"} 2023-07-24 06:11:08,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:08,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d., pid=135, masterSystemTime=1690179068403 2023-07-24 06:11:08,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:08,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:08,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 
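[Editor's aside] The "Opened ...; next sequenceid=2; SteppingSplitPolicy..." entries report split-policy settings taken from cluster configuration, not from the table descriptor: the desiredMaxFileSize values of roughly 10-11 GB are the configured base max file size with the printed jitterRate applied. If a test wanted to pin those per table instead, a hedged sketch of the descriptor-level overrides; the values are illustrative only.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

final class SplitPolicySketch {
  // Illustrative per-table overrides of what the open entries above show being
  // picked up from the region servers' configuration.
  static TableDescriptor withExplicitSplitPolicy(TableDescriptor base) {
    return TableDescriptorBuilder.newBuilder(base)
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
        .setMaxFileSize(10L * 1024 * 1024 * 1024)   // base size before jitter
        .build();
  }
}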
2023-07-24 06:11:08,424 INFO [StoreOpener-f19ff21ab96e503de37b75392d507964-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f21ebe1c2bb7e10b728959f64a217721, NAME => 'Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 06:11:08,424 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=0288980aebc6448555a2c0b507066a3d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:08,425 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068424"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179068424"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179068424"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179068424"}]},"ts":"1690179068424"} 2023-07-24 06:11:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,426 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=128 2023-07-24 06:11:08,426 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=128, state=SUCCESS; OpenRegionProcedure 35dc8ae44f66e3cab689719268e6a810, server=jenkins-hbase4.apache.org,38203,1690179042473 in 175 msec 2023-07-24 06:11:08,426 DEBUG [StoreOpener-f19ff21ab96e503de37b75392d507964-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964/f 2023-07-24 06:11:08,427 DEBUG [StoreOpener-f19ff21ab96e503de37b75392d507964-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964/f 2023-07-24 06:11:08,427 INFO [StoreOpener-f21ebe1c2bb7e10b728959f64a217721-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family f of region f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,427 INFO [StoreOpener-f19ff21ab96e503de37b75392d507964-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f19ff21ab96e503de37b75392d507964 columnFamilyName f 2023-07-24 06:11:08,428 INFO [StoreOpener-f19ff21ab96e503de37b75392d507964-1] regionserver.HStore(310): Store=f19ff21ab96e503de37b75392d507964/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:08,429 DEBUG [StoreOpener-f21ebe1c2bb7e10b728959f64a217721-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721/f 2023-07-24 06:11:08,429 DEBUG [StoreOpener-f21ebe1c2bb7e10b728959f64a217721-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721/f 2023-07-24 06:11:08,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,429 INFO [StoreOpener-f21ebe1c2bb7e10b728959f64a217721-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f21ebe1c2bb7e10b728959f64a217721 columnFamilyName f 2023-07-24 06:11:08,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,429 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=35dc8ae44f66e3cab689719268e6a810, ASSIGN in 337 msec 2023-07-24 06:11:08,430 INFO [StoreOpener-f21ebe1c2bb7e10b728959f64a217721-1] regionserver.HStore(310): Store=f21ebe1c2bb7e10b728959f64a217721/f, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:08,430 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=131 2023-07-24 06:11:08,430 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=131, state=SUCCESS; OpenRegionProcedure 0288980aebc6448555a2c0b507066a3d, server=jenkins-hbase4.apache.org,40449,1690179042726 in 175 msec 2023-07-24 06:11:08,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,431 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0288980aebc6448555a2c0b507066a3d, ASSIGN in 341 msec 2023-07-24 06:11:08,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:08,439 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f19ff21ab96e503de37b75392d507964; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11944719680, jitterRate=0.11243870854377747}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:08,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f19ff21ab96e503de37b75392d507964: 2023-07-24 06:11:08,440 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964., pid=132, masterSystemTime=1690179068400 2023-07-24 06:11:08,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:08,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f21ebe1c2bb7e10b728959f64a217721; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11814694240, jitterRate=0.10032914578914642}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 
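
The desiredMaxFileSize values in the "Opened ..." records above are consistent with the default hbase.hregion.max.filesize of 10737418240 bytes (10 GiB) scaled by the per-region jitterRate that is logged next to it; the 10 GiB base is an assumption here, since the test configuration is not shown in this log. A quick check in Java:

    public final class SplitSizeJitterCheck {
      public static void main(String[] args) {
        long base = 10737418240L;                           // assumed default hbase.hregion.max.filesize (10 GiB)
        double jitterRate = 0.10032914578914642;            // jitterRate logged above for region f21ebe1c2bb7e10b728959f64a217721
        long desired = (long) (base * (1.0 + jitterRate));  // 11814694240, matching the desiredMaxFileSize logged above
        System.out.println(desired);
      }
    }
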
2023-07-24 06:11:08,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f21ebe1c2bb7e10b728959f64a217721: 2023-07-24 06:11:08,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 2023-07-24 06:11:08,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 2023-07-24 06:11:08,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 2023-07-24 06:11:08,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b13b20d94b5e9816c18365c7782875ef, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 06:11:08,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721., pid=134, masterSystemTime=1690179068403 2023-07-24 06:11:08,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,442 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=f19ff21ab96e503de37b75392d507964, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:08,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,442 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068442"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179068442"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179068442"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179068442"}]},"ts":"1690179068442"} 2023-07-24 06:11:08,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 2023-07-24 06:11:08,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 
2023-07-24 06:11:08,444 INFO [StoreOpener-b13b20d94b5e9816c18365c7782875ef-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,444 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=f21ebe1c2bb7e10b728959f64a217721, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:08,444 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179068444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179068444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179068444"}]},"ts":"1690179068444"} 2023-07-24 06:11:08,446 DEBUG [StoreOpener-b13b20d94b5e9816c18365c7782875ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef/f 2023-07-24 06:11:08,446 DEBUG [StoreOpener-b13b20d94b5e9816c18365c7782875ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef/f 2023-07-24 06:11:08,446 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=129 2023-07-24 06:11:08,447 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; OpenRegionProcedure f19ff21ab96e503de37b75392d507964, server=jenkins-hbase4.apache.org,38203,1690179042473 in 196 msec 2023-07-24 06:11:08,447 INFO [StoreOpener-b13b20d94b5e9816c18365c7782875ef-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b13b20d94b5e9816c18365c7782875ef columnFamilyName f 2023-07-24 06:11:08,448 INFO [StoreOpener-b13b20d94b5e9816c18365c7782875ef-1] regionserver.HStore(310): Store=b13b20d94b5e9816c18365c7782875ef/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:08,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f19ff21ab96e503de37b75392d507964, ASSIGN in 357 msec 2023-07-24 06:11:08,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,449 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=127 2023-07-24 06:11:08,449 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=127, state=SUCCESS; OpenRegionProcedure f21ebe1c2bb7e10b728959f64a217721, server=jenkins-hbase4.apache.org,40449,1690179042726 in 196 msec 2023-07-24 06:11:08,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,450 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f21ebe1c2bb7e10b728959f64a217721, ASSIGN in 360 msec 2023-07-24 06:11:08,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:08,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b13b20d94b5e9816c18365c7782875ef; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9589958400, jitterRate=-0.1068655252456665}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:08,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b13b20d94b5e9816c18365c7782875ef: 2023-07-24 06:11:08,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef., pid=136, masterSystemTime=1690179068400 2023-07-24 06:11:08,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 2023-07-24 06:11:08,458 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 
2023-07-24 06:11:08,458 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=b13b20d94b5e9816c18365c7782875ef, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,458 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068458"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179068458"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179068458"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179068458"}]},"ts":"1690179068458"} 2023-07-24 06:11:08,461 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=130 2023-07-24 06:11:08,461 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=130, state=SUCCESS; OpenRegionProcedure b13b20d94b5e9816c18365c7782875ef, server=jenkins-hbase4.apache.org,38203,1690179042473 in 207 msec 2023-07-24 06:11:08,462 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=126 2023-07-24 06:11:08,462 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b13b20d94b5e9816c18365c7782875ef, ASSIGN in 372 msec 2023-07-24 06:11:08,463 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:08,463 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179068463"}]},"ts":"1690179068463"} 2023-07-24 06:11:08,464 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-24 06:11:08,466 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:08,467 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 880 msec 2023-07-24 06:11:08,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-24 06:11:08,697 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 126 completed 2023-07-24 06:11:08,697 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-24 06:11:08,698 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:08,701 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
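
At this point the CreateTableProcedure (pid=126) has finished and the test is waiting for every region of Group_testDisabledTableMove to be assigned. A minimal sketch of that client-side step, assuming the standard HBaseTestingUtility and Admin APIs; the utility instance, column family name and split keys below are illustrative, not values taken from this log:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class CreateAndWaitSketch {
      // Create a pre-split table, then block until all of its regions are assigned,
      // mirroring the CreateTableProcedure + "Waiting until all regions ... get assigned" sequence above.
      static void createAndWait(HBaseTestingUtility util) throws Exception {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        byte[][] splitKeys = { Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz") };  // illustrative split points only
        util.createTable(table, Bytes.toBytes("f"), splitKeys);
        util.waitUntilAllRegionsAssigned(table);
      }
    }
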
2023-07-24 06:11:08,702 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:08,702 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-24 06:11:08,702 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:08,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 06:11:08,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:08,710 INFO [Listener at localhost/46655] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 06:11:08,710 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 06:11:08,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=137, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-24 06:11:08,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-24 06:11:08,714 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179068714"}]},"ts":"1690179068714"} 2023-07-24 06:11:08,715 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-24 06:11:08,717 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-24 06:11:08,718 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f21ebe1c2bb7e10b728959f64a217721, UNASSIGN}, {pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=35dc8ae44f66e3cab689719268e6a810, UNASSIGN}, {pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f19ff21ab96e503de37b75392d507964, UNASSIGN}, {pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b13b20d94b5e9816c18365c7782875ef, UNASSIGN}, {pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0288980aebc6448555a2c0b507066a3d, UNASSIGN}] 2023-07-24 06:11:08,720 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=35dc8ae44f66e3cab689719268e6a810, UNASSIGN 2023-07-24 06:11:08,720 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=137, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f21ebe1c2bb7e10b728959f64a217721, UNASSIGN 2023-07-24 06:11:08,720 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f19ff21ab96e503de37b75392d507964, UNASSIGN 2023-07-24 06:11:08,720 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0288980aebc6448555a2c0b507066a3d, UNASSIGN 2023-07-24 06:11:08,720 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b13b20d94b5e9816c18365c7782875ef, UNASSIGN 2023-07-24 06:11:08,721 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=35dc8ae44f66e3cab689719268e6a810, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,721 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=f21ebe1c2bb7e10b728959f64a217721, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:08,721 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=f19ff21ab96e503de37b75392d507964, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,721 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068721"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068721"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068721"}]},"ts":"1690179068721"} 2023-07-24 06:11:08,721 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068721"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068721"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068721"}]},"ts":"1690179068721"} 2023-07-24 06:11:08,721 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068721"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068721"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068721"}]},"ts":"1690179068721"} 2023-07-24 06:11:08,722 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=0288980aebc6448555a2c0b507066a3d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:08,722 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=b13b20d94b5e9816c18365c7782875ef, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:08,722 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068721"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068721"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068721"}]},"ts":"1690179068721"} 2023-07-24 06:11:08,722 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068721"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179068721"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179068721"}]},"ts":"1690179068721"} 2023-07-24 06:11:08,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=138, state=RUNNABLE; CloseRegionProcedure f21ebe1c2bb7e10b728959f64a217721, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:08,723 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=140, state=RUNNABLE; CloseRegionProcedure f19ff21ab96e503de37b75392d507964, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:08,724 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=139, state=RUNNABLE; CloseRegionProcedure 35dc8ae44f66e3cab689719268e6a810, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:08,725 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=142, state=RUNNABLE; CloseRegionProcedure 0288980aebc6448555a2c0b507066a3d, server=jenkins-hbase4.apache.org,40449,1690179042726}] 2023-07-24 06:11:08,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=141, state=RUNNABLE; CloseRegionProcedure b13b20d94b5e9816c18365c7782875ef, server=jenkins-hbase4.apache.org,38203,1690179042473}] 2023-07-24 06:11:08,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-24 06:11:08,868 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-24 06:11:08,869 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-24 06:11:08,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f21ebe1c2bb7e10b728959f64a217721, disabling compactions & flushes 2023-07-24 06:11:08,878 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 2023-07-24 06:11:08,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 
2023-07-24 06:11:08,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. after waiting 0 ms 2023-07-24 06:11:08,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 2023-07-24 06:11:08,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b13b20d94b5e9816c18365c7782875ef, disabling compactions & flushes 2023-07-24 06:11:08,879 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 2023-07-24 06:11:08,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 2023-07-24 06:11:08,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. after waiting 0 ms 2023-07-24 06:11:08,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 2023-07-24 06:11:08,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:08,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:08,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721. 2023-07-24 06:11:08,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f21ebe1c2bb7e10b728959f64a217721: 2023-07-24 06:11:08,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef. 
2023-07-24 06:11:08,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b13b20d94b5e9816c18365c7782875ef: 2023-07-24 06:11:08,900 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:08,900 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:08,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0288980aebc6448555a2c0b507066a3d, disabling compactions & flushes 2023-07-24 06:11:08,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:08,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:08,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. after waiting 0 ms 2023-07-24 06:11:08,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 2023-07-24 06:11:08,902 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=f21ebe1c2bb7e10b728959f64a217721, regionState=CLOSED 2023-07-24 06:11:08,902 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068902"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068902"}]},"ts":"1690179068902"} 2023-07-24 06:11:08,904 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=b13b20d94b5e9816c18365c7782875ef, regionState=CLOSED 2023-07-24 06:11:08,904 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068904"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068904"}]},"ts":"1690179068904"} 2023-07-24 06:11:08,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:08,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d. 
2023-07-24 06:11:08,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0288980aebc6448555a2c0b507066a3d: 2023-07-24 06:11:08,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:08,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 35dc8ae44f66e3cab689719268e6a810, disabling compactions & flushes 2023-07-24 06:11:08,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. after waiting 0 ms 2023-07-24 06:11:08,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:08,921 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=141 2023-07-24 06:11:08,922 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=0288980aebc6448555a2c0b507066a3d, regionState=CLOSED 2023-07-24 06:11:08,922 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=141, state=SUCCESS; CloseRegionProcedure b13b20d94b5e9816c18365c7782875ef, server=jenkins-hbase4.apache.org,38203,1690179042473 in 180 msec 2023-07-24 06:11:08,922 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690179068921"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068921"}]},"ts":"1690179068921"} 2023-07-24 06:11:08,921 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=138 2023-07-24 06:11:08,922 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; CloseRegionProcedure f21ebe1c2bb7e10b728959f64a217721, server=jenkins-hbase4.apache.org,40449,1690179042726 in 183 msec 2023-07-24 06:11:08,924 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f21ebe1c2bb7e10b728959f64a217721, UNASSIGN in 204 msec 2023-07-24 06:11:08,924 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b13b20d94b5e9816c18365c7782875ef, UNASSIGN in 204 msec 2023-07-24 06:11:08,925 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=142 2023-07-24 06:11:08,926 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=142, state=SUCCESS; CloseRegionProcedure 0288980aebc6448555a2c0b507066a3d, server=jenkins-hbase4.apache.org,40449,1690179042726 in 198 msec 2023-07-24 06:11:08,927 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0288980aebc6448555a2c0b507066a3d, UNASSIGN in 208 msec 2023-07-24 06:11:08,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:08,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810. 2023-07-24 06:11:08,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 35dc8ae44f66e3cab689719268e6a810: 2023-07-24 06:11:08,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:08,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f19ff21ab96e503de37b75392d507964, disabling compactions & flushes 2023-07-24 06:11:08,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 2023-07-24 06:11:08,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 2023-07-24 06:11:08,931 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=35dc8ae44f66e3cab689719268e6a810, regionState=CLOSED 2023-07-24 06:11:08,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. after waiting 0 ms 2023-07-24 06:11:08,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 
2023-07-24 06:11:08,931 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068931"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068931"}]},"ts":"1690179068931"} 2023-07-24 06:11:08,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=139 2023-07-24 06:11:08,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=139, state=SUCCESS; CloseRegionProcedure 35dc8ae44f66e3cab689719268e6a810, server=jenkins-hbase4.apache.org,38203,1690179042473 in 208 msec 2023-07-24 06:11:08,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:08,936 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964. 2023-07-24 06:11:08,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f19ff21ab96e503de37b75392d507964: 2023-07-24 06:11:08,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=35dc8ae44f66e3cab689719268e6a810, UNASSIGN in 217 msec 2023-07-24 06:11:08,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:08,939 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=f19ff21ab96e503de37b75392d507964, regionState=CLOSED 2023-07-24 06:11:08,939 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690179068939"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179068939"}]},"ts":"1690179068939"} 2023-07-24 06:11:08,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=140 2023-07-24 06:11:08,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; CloseRegionProcedure f19ff21ab96e503de37b75392d507964, server=jenkins-hbase4.apache.org,38203,1690179042473 in 217 msec 2023-07-24 06:11:08,944 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=137 2023-07-24 06:11:08,944 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f19ff21ab96e503de37b75392d507964, UNASSIGN in 224 msec 2023-07-24 06:11:08,945 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179068945"}]},"ts":"1690179068945"} 2023-07-24 06:11:08,947 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 
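
The records above show DisableTableProcedure (pid=137) unassigning all five regions and then flipping the table state to DISABLED in hbase:meta. A minimal sketch of the corresponding client call, using the standard Admin API:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public final class DisableTableSketch {
      // Disable the table and verify the DISABLED state that gets recorded in hbase:meta above.
      static void disable(Connection conn, TableName table) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          admin.disableTable(table);  // submits a DisableTableProcedure and waits for it to complete
          if (!admin.isTableDisabled(table)) {
            throw new IllegalStateException(table + " should be DISABLED");
          }
        }
      }
    }
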
2023-07-24 06:11:08,948 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-24 06:11:08,956 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=137, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 245 msec 2023-07-24 06:11:09,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-24 06:11:09,018 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 137 completed 2023-07-24 06:11:09,018 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1712646620 2023-07-24 06:11:09,020 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1712646620 2023-07-24 06:11:09,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1712646620 2023-07-24 06:11:09,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:09,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:09,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:09,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-24 06:11:09,026 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1712646620, current retry=0 2023-07-24 06:11:09,026 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1712646620. 
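
Because the table is already disabled when it is moved to Group_testDisabledTableMove_1712646620, the master only rewrites the group membership znodes and skips region moves ("Moving 0 region(s)"). A sketch of the move call itself, assuming the hbase-rsgroup module's RSGroupAdminClient that these tests exercise; treat the class name and its constructor as assumptions about that module's client API:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveDisabledTableSketch {
      // Move a (disabled) table to another RegionServer group; only group metadata changes,
      // since there are no open regions to relocate.
      static void moveTable(Connection conn, TableName table, String targetGroup) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);
      }
    }
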
2023-07-24 06:11:09,026 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:09,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:09,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:09,032 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 06:11:09,032 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:09,034 INFO [Listener at localhost/46655] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 06:11:09,034 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 06:11:09,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:09,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:53912 deadline: 1690179129034, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-24 06:11:09,036 DEBUG [Listener at localhost/46655] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
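
The TableNotEnabledException above is the expected result of calling disableTable on a table that is already DISABLED: preflightChecks rejects the request before any procedure is stored. Cleanup code therefore either checks the table state first or treats the exception as "already done" before deleting, roughly as sketched below (a minimal illustration with the standard Admin API, not the test's own code):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;

    public final class DropDisabledTableSketch {
      // Ensure the table is disabled, tolerating the case where it already is, then delete it.
      static void dropTable(Admin admin, TableName table) throws Exception {
        try {
          admin.disableTable(table);
        } catch (TableNotEnabledException alreadyDisabled) {
          // Same situation as "already disabled, so just deleting it." above.
        }
        admin.deleteTable(table);  // DeleteTableProcedure archives the region dirs and clears hbase:meta
      }
    }
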
2023-07-24 06:11:09,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-24 06:11:09,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] procedure2.ProcedureExecutor(1029): Stored pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 06:11:09,040 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 06:11:09,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1712646620' 2023-07-24 06:11:09,041 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=149, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 06:11:09,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1712646620 2023-07-24 06:11:09,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:09,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:09,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:09,049 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:09,049 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:09,049 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:09,050 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:09,050 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:09,053 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef/recovered.edits] 2023-07-24 06:11:09,053 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-24 06:11:09,054 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964/recovered.edits] 2023-07-24 06:11:09,054 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810/recovered.edits] 2023-07-24 06:11:09,054 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721/recovered.edits] 2023-07-24 06:11:09,056 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d/f, FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d/recovered.edits] 2023-07-24 06:11:09,068 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721/recovered.edits/4.seqid 2023-07-24 06:11:09,068 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d/recovered.edits/4.seqid 2023-07-24 06:11:09,068 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef/recovered.edits/4.seqid 2023-07-24 06:11:09,069 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964/recovered.edits/4.seqid 2023-07-24 06:11:09,069 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810/recovered.edits/4.seqid to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/archive/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810/recovered.edits/4.seqid 2023-07-24 06:11:09,069 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f21ebe1c2bb7e10b728959f64a217721 2023-07-24 06:11:09,070 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/0288980aebc6448555a2c0b507066a3d 2023-07-24 06:11:09,070 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/35dc8ae44f66e3cab689719268e6a810 2023-07-24 06:11:09,071 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/f19ff21ab96e503de37b75392d507964 2023-07-24 06:11:09,071 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/.tmp/data/default/Group_testDisabledTableMove/b13b20d94b5e9816c18365c7782875ef 2023-07-24 06:11:09,071 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 06:11:09,074 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=149, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 06:11:09,077 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-24 06:11:09,082 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-24 06:11:09,083 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=149, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 06:11:09,084 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-24 06:11:09,084 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179069084"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:09,084 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179069084"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:09,084 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179069084"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:09,084 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179069084"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:09,084 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179069084"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:09,085 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 06:11:09,086 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f21ebe1c2bb7e10b728959f64a217721, NAME => 'Group_testDisabledTableMove,,1690179067585.f21ebe1c2bb7e10b728959f64a217721.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 35dc8ae44f66e3cab689719268e6a810, NAME => 'Group_testDisabledTableMove,aaaaa,1690179067585.35dc8ae44f66e3cab689719268e6a810.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f19ff21ab96e503de37b75392d507964, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690179067585.f19ff21ab96e503de37b75392d507964.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => b13b20d94b5e9816c18365c7782875ef, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690179067585.b13b20d94b5e9816c18365c7782875ef.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 0288980aebc6448555a2c0b507066a3d, NAME => 'Group_testDisabledTableMove,zzzzz,1690179067585.0288980aebc6448555a2c0b507066a3d.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 06:11:09,086 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-24 06:11:09,086 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690179069086"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:09,087 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-24 06:11:09,089 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=149, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 06:11:09,090 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=149, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 52 msec 2023-07-24 06:11:09,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-24 06:11:09,155 INFO [Listener at localhost/46655] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 149 completed 2023-07-24 06:11:09,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:09,159 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:09,160 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:09,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:11:09,160 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:09,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:34793] to rsgroup default 2023-07-24 06:11:09,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1712646620 2023-07-24 06:11:09,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:09,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:09,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:09,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1712646620, current retry=0 2023-07-24 06:11:09,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34793,1690179046626, jenkins-hbase4.apache.org,37173,1690179042942] are moved back to Group_testDisabledTableMove_1712646620 2023-07-24 06:11:09,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1712646620 => default 2023-07-24 06:11:09,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:09,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1712646620 2023-07-24 06:11:09,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:09,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:09,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 06:11:09,174 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:09,175 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:09,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:11:09,175 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:09,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:09,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:09,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:09,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:09,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:09,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:09,186 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:09,187 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:09,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:09,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:09,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:09,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:09,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:09,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:09,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:09,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:09,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180269197, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:09,198 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:09,199 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:09,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:09,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:09,200 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:09,201 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:09,201 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:09,221 INFO [Listener at localhost/46655] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=502 (was 499) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-854381483_17 at /127.0.0.1:53180 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-595519289_17 at /127.0.0.1:51028 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63197ba-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2231fec8-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=771 (was 742) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=380 (was 352) - SystemLoadAverage LEAK? 
-, ProcessCount=175 (was 175), AvailableMemoryMB=8125 (was 8140) 2023-07-24 06:11:09,222 WARN [Listener at localhost/46655] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-24 06:11:09,241 INFO [Listener at localhost/46655] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=502, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=380, ProcessCount=175, AvailableMemoryMB=8125 2023-07-24 06:11:09,242 WARN [Listener at localhost/46655] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-24 06:11:09,242 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-24 06:11:09,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:09,246 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:09,246 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:09,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 06:11:09,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:09,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:09,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:09,248 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:09,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:09,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:09,254 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:09,256 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:09,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:09,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:09,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:09,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:09,262 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:09,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:09,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:09,267 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39303] to rsgroup master 2023-07-24 06:11:09,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:09,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53912 deadline: 1690180269267, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 2023-07-24 06:11:09,268 WARN [Listener at localhost/46655] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39303 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 06:11:09,270 INFO [Listener at localhost/46655] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:09,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:09,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:09,271 INFO [Listener at localhost/46655] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34793, jenkins-hbase4.apache.org:37173, jenkins-hbase4.apache.org:38203, jenkins-hbase4.apache.org:40449], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:09,272 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:09,272 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39303] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:09,273 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 06:11:09,273 INFO [Listener at localhost/46655] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 06:11:09,273 DEBUG [Listener at localhost/46655] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x42116de3 to 127.0.0.1:54990 2023-07-24 06:11:09,273 DEBUG [Listener at localhost/46655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,275 DEBUG [Listener at localhost/46655] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 06:11:09,275 DEBUG [Listener at localhost/46655] util.JVMClusterUtil(257): Found active master hash=1742323382, stopped=false 2023-07-24 06:11:09,275 DEBUG [Listener at localhost/46655] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 06:11:09,275 DEBUG [Listener at localhost/46655] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 06:11:09,275 INFO [Listener at localhost/46655] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:11:09,380 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:09,381 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:09,381 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:09,381 DEBUG 
[Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:09,380 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:09,381 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:09,381 INFO [Listener at localhost/46655] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 06:11:09,381 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:09,381 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:09,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:09,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:09,382 DEBUG [Listener at localhost/46655] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x680e8523 to 127.0.0.1:54990 2023-07-24 06:11:09,383 DEBUG [Listener at localhost/46655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,383 INFO [Listener at localhost/46655] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38203,1690179042473' ***** 2023-07-24 06:11:09,383 INFO [Listener at localhost/46655] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:09,383 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:09,383 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:09,383 INFO [Listener at localhost/46655] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40449,1690179042726' ***** 2023-07-24 06:11:09,386 INFO [Listener at localhost/46655] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:09,385 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34793,1690179046626' ***** 2023-07-24 06:11:09,385 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37173,1690179042942' ***** 2023-07-24 06:11:09,390 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-07-24 06:11:09,390 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set 
and not carrying any regions 2023-07-24 06:11:09,390 INFO [Listener at localhost/46655] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37173,1690179042942' ***** 2023-07-24 06:11:09,391 INFO [Listener at localhost/46655] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:09,390 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:09,391 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:09,391 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:09,410 INFO [RS:1;jenkins-hbase4:40449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@61c85c36{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:09,410 INFO [RS:3;jenkins-hbase4:34793] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@289fa920{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:09,410 INFO [RS:0;jenkins-hbase4:38203] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@36536101{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:09,410 INFO [RS:2;jenkins-hbase4:37173] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@445c6e68{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:09,416 INFO [RS:1;jenkins-hbase4:40449] server.AbstractConnector(383): Stopped ServerConnector@269a05ea{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:09,416 INFO [RS:3;jenkins-hbase4:34793] server.AbstractConnector(383): Stopped ServerConnector@35efd609{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:09,416 INFO [RS:0;jenkins-hbase4:38203] server.AbstractConnector(383): Stopped ServerConnector@5b42239d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:09,416 INFO [RS:3;jenkins-hbase4:34793] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:09,416 INFO [RS:2;jenkins-hbase4:37173] server.AbstractConnector(383): Stopped ServerConnector@49040dda{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:09,416 INFO [RS:2;jenkins-hbase4:37173] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:09,416 INFO [RS:0;jenkins-hbase4:38203] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:09,416 INFO [RS:1;jenkins-hbase4:40449] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:09,418 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:09,419 INFO [RS:1;jenkins-hbase4:40449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10237f46{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:09,417 INFO [RS:3;jenkins-hbase4:34793] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@34f7812e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:09,420 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:09,420 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:09,421 INFO [RS:3;jenkins-hbase4:34793] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14ac5f55{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:09,420 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:09,420 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:09,420 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:09,420 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:09,420 INFO [RS:0;jenkins-hbase4:38203] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6289171c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:09,420 INFO [RS:1;jenkins-hbase4:40449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f68fdb3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:09,430 INFO [RS:0;jenkins-hbase4:38203] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@21aec1e3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:09,419 INFO [RS:2;jenkins-hbase4:37173] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1946f983{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:09,431 INFO [RS:1;jenkins-hbase4:40449] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:09,425 INFO [RS:3;jenkins-hbase4:34793] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:09,421 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:09,432 INFO [RS:2;jenkins-hbase4:37173] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4a081fb2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:09,432 INFO [RS:3;jenkins-hbase4:34793] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:09,432 INFO [RS:1;jenkins-hbase4:40449] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-24 06:11:09,433 INFO [RS:3;jenkins-hbase4:34793] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:09,433 INFO [RS:1;jenkins-hbase4:40449] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:09,433 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:09,433 DEBUG [RS:3;jenkins-hbase4:34793] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2dd2b676 to 127.0.0.1:54990 2023-07-24 06:11:09,433 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(3305): Received CLOSE for 0aba53baeae40b1c65e437bbd16090b8 2023-07-24 06:11:09,433 INFO [RS:2;jenkins-hbase4:37173] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:09,433 INFO [RS:2;jenkins-hbase4:37173] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:09,433 INFO [RS:2;jenkins-hbase4:37173] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:09,433 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:11:09,434 DEBUG [RS:2;jenkins-hbase4:37173] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x62d1bdd2 to 127.0.0.1:54990 2023-07-24 06:11:09,434 DEBUG [RS:2;jenkins-hbase4:37173] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,434 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37173,1690179042942; all regions closed. 2023-07-24 06:11:09,433 DEBUG [RS:3;jenkins-hbase4:34793] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,434 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(3305): Received CLOSE for f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:09,437 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(3305): Received CLOSE for 383d19758bb15afdbebec46f9d69da35 2023-07-24 06:11:09,437 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:09,437 DEBUG [RS:1;jenkins-hbase4:40449] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0bee5f2a to 127.0.0.1:54990 2023-07-24 06:11:09,437 DEBUG [RS:1;jenkins-hbase4:40449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,437 INFO [RS:1;jenkins-hbase4:40449] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:09,437 INFO [RS:1;jenkins-hbase4:40449] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:09,437 INFO [RS:1;jenkins-hbase4:40449] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 06:11:09,437 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 06:11:09,434 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34793,1690179046626; all regions closed. 
2023-07-24 06:11:09,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0aba53baeae40b1c65e437bbd16090b8, disabling compactions & flushes 2023-07-24 06:11:09,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:11:09,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:11:09,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. after waiting 0 ms 2023-07-24 06:11:09,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:11:09,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0aba53baeae40b1c65e437bbd16090b8 1/1 column families, dataSize=22.12 KB heapSize=36.49 KB 2023-07-24 06:11:09,438 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-24 06:11:09,438 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 0aba53baeae40b1c65e437bbd16090b8=hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8., f7fdbbdd2ac0a663780a488a70ff77f3=unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3., 383d19758bb15afdbebec46f9d69da35=hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35.} 2023-07-24 06:11:09,438 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 06:11:09,438 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 06:11:09,439 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 06:11:09,439 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 06:11:09,439 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 06:11:09,439 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=77.90 KB heapSize=122.84 KB 2023-07-24 06:11:09,439 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1504): Waiting on 0aba53baeae40b1c65e437bbd16090b8, 1588230740, 383d19758bb15afdbebec46f9d69da35, f7fdbbdd2ac0a663780a488a70ff77f3 2023-07-24 06:11:09,459 INFO [RS:0;jenkins-hbase4:38203] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:09,479 INFO [RS:0;jenkins-hbase4:38203] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:09,479 INFO [RS:0;jenkins-hbase4:38203] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 06:11:09,479 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(3305): Received CLOSE for 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:09,479 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:09,479 DEBUG [RS:0;jenkins-hbase4:38203] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x014ded73 to 127.0.0.1:54990 2023-07-24 06:11:09,479 DEBUG [RS:0;jenkins-hbase4:38203] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,479 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 06:11:09,479 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1478): Online Regions={2370e0157d921fd1b13ab1255ffb9e5f=testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f.} 2023-07-24 06:11:09,480 DEBUG [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1504): Waiting on 2370e0157d921fd1b13ab1255ffb9e5f 2023-07-24 06:11:09,481 DEBUG [RS:2;jenkins-hbase4:37173] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs 2023-07-24 06:11:09,481 INFO [RS:2;jenkins-hbase4:37173] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37173%2C1690179042942:(num 1690179044984) 2023-07-24 06:11:09,481 DEBUG [RS:2;jenkins-hbase4:37173] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,481 INFO [RS:2;jenkins-hbase4:37173] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:09,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2370e0157d921fd1b13ab1255ffb9e5f, disabling compactions & flushes 2023-07-24 06:11:09,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:09,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:09,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. after waiting 0 ms 2023-07-24 06:11:09,488 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:09,490 INFO [RS:2;jenkins-hbase4:37173] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:09,490 INFO [RS:2;jenkins-hbase4:37173] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:09,490 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:09,490 INFO [RS:2;jenkins-hbase4:37173] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:09,494 INFO [RS:2;jenkins-hbase4:37173] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 06:11:09,496 INFO [RS:2;jenkins-hbase4:37173] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37173 2023-07-24 06:11:09,504 DEBUG [RS:3;jenkins-hbase4:34793] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs 2023-07-24 06:11:09,505 INFO [RS:3;jenkins-hbase4:34793] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34793%2C1690179046626:(num 1690179047107) 2023-07-24 06:11:09,505 DEBUG [RS:3;jenkins-hbase4:34793] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,505 INFO [RS:3;jenkins-hbase4:34793] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:09,507 INFO [RS:3;jenkins-hbase4:34793] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:09,511 INFO [RS:3;jenkins-hbase4:34793] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:09,511 INFO [RS:3;jenkins-hbase4:34793] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:09,512 INFO [RS:3;jenkins-hbase4:34793] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 06:11:09,511 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:09,513 INFO [RS:3;jenkins-hbase4:34793] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34793 2023-07-24 06:11:09,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/testRename/2370e0157d921fd1b13ab1255ffb9e5f/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 06:11:09,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 2023-07-24 06:11:09,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2370e0157d921fd1b13ab1255ffb9e5f: 2023-07-24 06:11:09,515 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690179061925.2370e0157d921fd1b13ab1255ffb9e5f. 
2023-07-24 06:11:09,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.12 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/.tmp/m/0e598a8d5ef54b2bb87616647539cee8 2023-07-24 06:11:09,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0e598a8d5ef54b2bb87616647539cee8 2023-07-24 06:11:09,538 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=71.92 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/.tmp/info/a36e3352c03c4c6a8f1cea1b31d10f5a 2023-07-24 06:11:09,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/.tmp/m/0e598a8d5ef54b2bb87616647539cee8 as hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m/0e598a8d5ef54b2bb87616647539cee8 2023-07-24 06:11:09,544 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:09,544 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34793,1690179046626 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:09,545 ERROR [Listener at localhost/46655-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@50ce89f6 rejected from java.util.concurrent.ThreadPoolExecutor@54f10737[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:11:09,545 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37173,1690179042942 2023-07-24 06:11:09,546 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37173,1690179042942] 2023-07-24 06:11:09,546 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37173,1690179042942; numProcessing=1 2023-07-24 06:11:09,548 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37173,1690179042942 already deleted, retry=false 2023-07-24 06:11:09,548 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37173,1690179042942 expired; onlineServers=3 2023-07-24 06:11:09,548 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
a36e3352c03c4c6a8f1cea1b31d10f5a 2023-07-24 06:11:09,548 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34793,1690179046626] 2023-07-24 06:11:09,548 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34793,1690179046626; numProcessing=2 2023-07-24 06:11:09,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0e598a8d5ef54b2bb87616647539cee8 2023-07-24 06:11:09,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/m/0e598a8d5ef54b2bb87616647539cee8, entries=22, sequenceid=101, filesize=5.9 K 2023-07-24 06:11:09,551 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34793,1690179046626 already deleted, retry=false 2023-07-24 06:11:09,551 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34793,1690179046626 expired; onlineServers=2 2023-07-24 06:11:09,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.12 KB/22653, heapSize ~36.48 KB/37352, currentSize=0 B/0 for 0aba53baeae40b1c65e437bbd16090b8 in 115ms, sequenceid=101, compaction requested=false 2023-07-24 06:11:09,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 06:11:09,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/rsgroup/0aba53baeae40b1c65e437bbd16090b8/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-24 06:11:09,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:11:09,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:11:09,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0aba53baeae40b1c65e437bbd16090b8: 2023-07-24 06:11:09,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690179045731.0aba53baeae40b1c65e437bbd16090b8. 2023-07-24 06:11:09,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f7fdbbdd2ac0a663780a488a70ff77f3, disabling compactions & flushes 2023-07-24 06:11:09,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:09,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:09,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 
after waiting 0 ms 2023-07-24 06:11:09,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:09,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/.tmp/rep_barrier/41c2b2b44aec49538ee5da482d5daf53 2023-07-24 06:11:09,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/default/unmovedTable/f7fdbbdd2ac0a663780a488a70ff77f3/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 06:11:09,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:09,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f7fdbbdd2ac0a663780a488a70ff77f3: 2023-07-24 06:11:09,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690179063602.f7fdbbdd2ac0a663780a488a70ff77f3. 2023-07-24 06:11:09,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 383d19758bb15afdbebec46f9d69da35, disabling compactions & flushes 2023-07-24 06:11:09,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 2023-07-24 06:11:09,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 2023-07-24 06:11:09,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. after waiting 0 ms 2023-07-24 06:11:09,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 
2023-07-24 06:11:09,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 383d19758bb15afdbebec46f9d69da35 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-24 06:11:09,587 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 41c2b2b44aec49538ee5da482d5daf53 2023-07-24 06:11:09,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35/.tmp/info/244729d9678f47c6982d6c9d0a87cacf 2023-07-24 06:11:09,608 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/.tmp/table/a73eccccecb74eb08e77b684f065e81c 2023-07-24 06:11:09,614 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a73eccccecb74eb08e77b684f065e81c 2023-07-24 06:11:09,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35/.tmp/info/244729d9678f47c6982d6c9d0a87cacf as hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35/info/244729d9678f47c6982d6c9d0a87cacf 2023-07-24 06:11:09,615 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/.tmp/info/a36e3352c03c4c6a8f1cea1b31d10f5a as hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/info/a36e3352c03c4c6a8f1cea1b31d10f5a 2023-07-24 06:11:09,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35/info/244729d9678f47c6982d6c9d0a87cacf, entries=2, sequenceid=6, filesize=4.8 K 2023-07-24 06:11:09,622 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a36e3352c03c4c6a8f1cea1b31d10f5a 2023-07-24 06:11:09,622 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/info/a36e3352c03c4c6a8f1cea1b31d10f5a, entries=97, sequenceid=200, filesize=15.9 K 2023-07-24 06:11:09,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 383d19758bb15afdbebec46f9d69da35 in 42ms, sequenceid=6, compaction requested=false 2023-07-24 06:11:09,624 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/.tmp/rep_barrier/41c2b2b44aec49538ee5da482d5daf53 as hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/rep_barrier/41c2b2b44aec49538ee5da482d5daf53 2023-07-24 06:11:09,630 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/namespace/383d19758bb15afdbebec46f9d69da35/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-24 06:11:09,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 2023-07-24 06:11:09,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 383d19758bb15afdbebec46f9d69da35: 2023-07-24 06:11:09,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690179045409.383d19758bb15afdbebec46f9d69da35. 2023-07-24 06:11:09,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 41c2b2b44aec49538ee5da482d5daf53 2023-07-24 06:11:09,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/rep_barrier/41c2b2b44aec49538ee5da482d5daf53, entries=18, sequenceid=200, filesize=6.9 K 2023-07-24 06:11:09,634 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/.tmp/table/a73eccccecb74eb08e77b684f065e81c as hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/table/a73eccccecb74eb08e77b684f065e81c 2023-07-24 06:11:09,640 DEBUG [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 06:11:09,640 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a73eccccecb74eb08e77b684f065e81c 2023-07-24 06:11:09,640 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/table/a73eccccecb74eb08e77b684f065e81c, entries=31, sequenceid=200, filesize=7.4 K 2023-07-24 06:11:09,641 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~77.90 KB/79773, heapSize ~122.79 KB/125736, currentSize=0 B/0 for 1588230740 in 202ms, sequenceid=200, compaction requested=false 2023-07-24 06:11:09,649 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/data/hbase/meta/1588230740/recovered.edits/203.seqid, newMaxSeqId=203, maxSeqId=1 2023-07-24 06:11:09,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:11:09,650 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 06:11:09,651 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 06:11:09,651 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 06:11:09,680 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38203,1690179042473; all regions closed. 2023-07-24 06:11:09,691 DEBUG [RS:0;jenkins-hbase4:38203] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs 2023-07-24 06:11:09,691 INFO [RS:0;jenkins-hbase4:38203] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38203%2C1690179042473:(num 1690179044984) 2023-07-24 06:11:09,691 DEBUG [RS:0;jenkins-hbase4:38203] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,691 INFO [RS:0;jenkins-hbase4:38203] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:09,691 INFO [RS:0;jenkins-hbase4:38203] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:09,692 INFO [RS:0;jenkins-hbase4:38203] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:09,692 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:09,692 INFO [RS:0;jenkins-hbase4:38203] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:09,692 INFO [RS:0;jenkins-hbase4:38203] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 06:11:09,693 INFO [RS:0;jenkins-hbase4:38203] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38203 2023-07-24 06:11:09,699 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:09,699 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:09,699 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38203,1690179042473 2023-07-24 06:11:09,700 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38203,1690179042473] 2023-07-24 06:11:09,700 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38203,1690179042473; numProcessing=3 2023-07-24 06:11:09,701 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38203,1690179042473 already deleted, retry=false 2023-07-24 06:11:09,701 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38203,1690179042473 expired; onlineServers=1 2023-07-24 06:11:09,793 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:09,793 INFO [RS:2;jenkins-hbase4:37173] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37173,1690179042942; zookeeper connection closed. 2023-07-24 06:11:09,793 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:37173-0x10195f3f3a20003, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:09,794 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@351d86f9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@351d86f9 2023-07-24 06:11:09,840 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40449,1690179042726; all regions closed. 2023-07-24 06:11:09,846 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/WALs/jenkins-hbase4.apache.org,40449,1690179042726/jenkins-hbase4.apache.org%2C40449%2C1690179042726.meta.1690179045167.meta not finished, retry = 0 2023-07-24 06:11:09,893 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:09,893 INFO [RS:3;jenkins-hbase4:34793] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34793,1690179046626; zookeeper connection closed. 
2023-07-24 06:11:09,894 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:34793-0x10195f3f3a2000b, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:09,894 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2b10aaf3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2b10aaf3 2023-07-24 06:11:09,909 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 06:11:09,910 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 06:11:09,952 DEBUG [RS:1;jenkins-hbase4:40449] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs 2023-07-24 06:11:09,952 INFO [RS:1;jenkins-hbase4:40449] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40449%2C1690179042726.meta:.meta(num 1690179045167) 2023-07-24 06:11:09,969 DEBUG [RS:1;jenkins-hbase4:40449] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/oldWALs 2023-07-24 06:11:09,970 INFO [RS:1;jenkins-hbase4:40449] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40449%2C1690179042726:(num 1690179044984) 2023-07-24 06:11:09,970 DEBUG [RS:1;jenkins-hbase4:40449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,970 INFO [RS:1;jenkins-hbase4:40449] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:09,970 INFO [RS:1;jenkins-hbase4:40449] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:09,970 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 06:11:09,971 INFO [RS:1;jenkins-hbase4:40449] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40449 2023-07-24 06:11:09,974 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40449,1690179042726 2023-07-24 06:11:09,975 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:09,976 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40449,1690179042726] 2023-07-24 06:11:09,976 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40449,1690179042726; numProcessing=4 2023-07-24 06:11:09,977 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40449,1690179042726 already deleted, retry=false 2023-07-24 06:11:09,977 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40449,1690179042726 expired; onlineServers=0 2023-07-24 06:11:09,977 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39303,1690179040397' ***** 2023-07-24 06:11:09,977 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 06:11:09,978 DEBUG [M:0;jenkins-hbase4:39303] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26e4b48d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:09,978 INFO [M:0;jenkins-hbase4:39303] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:09,982 INFO [M:0;jenkins-hbase4:39303] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@47177c10{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 06:11:09,982 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:09,983 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:09,983 INFO [M:0;jenkins-hbase4:39303] server.AbstractConnector(383): Stopped ServerConnector@cbd2559{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:09,983 INFO [M:0;jenkins-hbase4:39303] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:09,983 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:09,987 INFO [M:0;jenkins-hbase4:39303] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@1a079e3c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:09,988 INFO [M:0;jenkins-hbase4:39303] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2db17b81{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:09,988 INFO [M:0;jenkins-hbase4:39303] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39303,1690179040397 2023-07-24 06:11:09,989 INFO [M:0;jenkins-hbase4:39303] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39303,1690179040397; all regions closed. 2023-07-24 06:11:09,989 DEBUG [M:0;jenkins-hbase4:39303] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:09,989 INFO [M:0;jenkins-hbase4:39303] master.HMaster(1491): Stopping master jetty server 2023-07-24 06:11:09,990 INFO [M:0;jenkins-hbase4:39303] server.AbstractConnector(383): Stopped ServerConnector@70f81b2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:09,990 DEBUG [M:0;jenkins-hbase4:39303] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 06:11:09,990 DEBUG [M:0;jenkins-hbase4:39303] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 06:11:09,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179044597] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179044597,5,FailOnTimeoutGroup] 2023-07-24 06:11:09,990 INFO [M:0;jenkins-hbase4:39303] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 06:11:09,991 INFO [M:0;jenkins-hbase4:39303] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 06:11:09,991 INFO [M:0;jenkins-hbase4:39303] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 06:11:09,991 DEBUG [M:0;jenkins-hbase4:39303] master.HMaster(1512): Stopping service threads 2023-07-24 06:11:09,991 INFO [M:0;jenkins-hbase4:39303] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 06:11:09,991 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 06:11:09,991 ERROR [M:0;jenkins-hbase4:39303] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-24 06:11:09,992 INFO [M:0;jenkins-hbase4:39303] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 06:11:09,992 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-24 06:11:09,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179044597] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179044597,5,FailOnTimeoutGroup] 2023-07-24 06:11:09,999 DEBUG [M:0;jenkins-hbase4:39303] zookeeper.ZKUtil(398): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 06:11:09,999 WARN [M:0;jenkins-hbase4:39303] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 06:11:09,999 INFO [M:0;jenkins-hbase4:39303] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 06:11:09,999 INFO [M:0;jenkins-hbase4:39303] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 06:11:09,999 DEBUG [M:0;jenkins-hbase4:39303] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 06:11:09,999 INFO [M:0;jenkins-hbase4:39303] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:09,999 DEBUG [M:0;jenkins-hbase4:39303] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:09,999 DEBUG [M:0;jenkins-hbase4:39303] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 06:11:09,999 DEBUG [M:0;jenkins-hbase4:39303] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 06:11:09,999 INFO [M:0;jenkins-hbase4:39303] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=499.85 KB heapSize=597.79 KB 2023-07-24 06:11:10,048 INFO [M:0;jenkins-hbase4:39303] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=499.85 KB at sequenceid=1104 (bloomFilter=true), to=hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c6b274d414684bd3b9b8ffe95e21708e 2023-07-24 06:11:10,061 DEBUG [M:0;jenkins-hbase4:39303] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c6b274d414684bd3b9b8ffe95e21708e as hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c6b274d414684bd3b9b8ffe95e21708e 2023-07-24 06:11:10,074 INFO [M:0;jenkins-hbase4:39303] regionserver.HStore(1080): Added hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c6b274d414684bd3b9b8ffe95e21708e, entries=148, sequenceid=1104, filesize=26.2 K 2023-07-24 06:11:10,075 INFO [M:0;jenkins-hbase4:39303] regionserver.HRegion(2948): Finished flush of dataSize ~499.85 KB/511843, heapSize ~597.77 KB/612120, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 76ms, sequenceid=1104, compaction requested=false 2023-07-24 06:11:10,088 INFO [M:0;jenkins-hbase4:39303] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:10,088 DEBUG [M:0;jenkins-hbase4:39303] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 06:11:10,094 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:10,094 INFO [RS:1;jenkins-hbase4:40449] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40449,1690179042726; zookeeper connection closed. 2023-07-24 06:11:10,094 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:40449-0x10195f3f3a20002, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:10,095 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5e527a0d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5e527a0d 2023-07-24 06:11:10,105 INFO [M:0;jenkins-hbase4:39303] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 06:11:10,106 INFO [M:0;jenkins-hbase4:39303] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39303 2023-07-24 06:11:10,106 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 06:11:10,111 DEBUG [M:0;jenkins-hbase4:39303] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39303,1690179040397 already deleted, retry=false 2023-07-24 06:11:10,194 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:10,194 INFO [RS:0;jenkins-hbase4:38203] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38203,1690179042473; zookeeper connection closed. 2023-07-24 06:11:10,194 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): regionserver:38203-0x10195f3f3a20001, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:10,195 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@397a870b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@397a870b 2023-07-24 06:11:10,195 INFO [Listener at localhost/46655] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-24 06:11:10,295 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:10,295 INFO [M:0;jenkins-hbase4:39303] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39303,1690179040397; zookeeper connection closed. 2023-07-24 06:11:10,295 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): master:39303-0x10195f3f3a20000, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:10,297 WARN [Listener at localhost/46655] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 06:11:10,302 INFO [Listener at localhost/46655] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 06:11:10,410 WARN [BP-1478983737-172.31.14.131-1690179036409 heartbeating to localhost/127.0.0.1:41501] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 06:11:10,410 WARN [BP-1478983737-172.31.14.131-1690179036409 heartbeating to localhost/127.0.0.1:41501] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1478983737-172.31.14.131-1690179036409 (Datanode Uuid f1349270-4b00-4f7a-85d3-0ec53fd9bf86) service to localhost/127.0.0.1:41501 2023-07-24 06:11:10,412 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data5/current/BP-1478983737-172.31.14.131-1690179036409] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:10,412 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data6/current/BP-1478983737-172.31.14.131-1690179036409] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:10,415 WARN [Listener at localhost/46655] datanode.DirectoryScanner(534): 
DirectoryScanner: shutdown has been called 2023-07-24 06:11:10,418 INFO [Listener at localhost/46655] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 06:11:10,521 WARN [BP-1478983737-172.31.14.131-1690179036409 heartbeating to localhost/127.0.0.1:41501] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 06:11:10,521 WARN [BP-1478983737-172.31.14.131-1690179036409 heartbeating to localhost/127.0.0.1:41501] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1478983737-172.31.14.131-1690179036409 (Datanode Uuid 000834ba-e84b-4c5b-8813-4b84d1418ad4) service to localhost/127.0.0.1:41501 2023-07-24 06:11:10,522 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data3/current/BP-1478983737-172.31.14.131-1690179036409] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:10,522 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data4/current/BP-1478983737-172.31.14.131-1690179036409] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:10,524 WARN [Listener at localhost/46655] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 06:11:10,530 INFO [Listener at localhost/46655] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 06:11:10,605 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:11:10,605 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 06:11:10,605 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 06:11:10,632 WARN [BP-1478983737-172.31.14.131-1690179036409 heartbeating to localhost/127.0.0.1:41501] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 06:11:10,632 WARN [BP-1478983737-172.31.14.131-1690179036409 heartbeating to localhost/127.0.0.1:41501] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1478983737-172.31.14.131-1690179036409 (Datanode Uuid 37b2cf48-1551-4d50-81c2-781c9d3bfc61) service to localhost/127.0.0.1:41501 2023-07-24 06:11:10,633 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data1/current/BP-1478983737-172.31.14.131-1690179036409] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:10,633 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/cluster_b3419ee1-e611-0316-02da-22a5ce1ea1be/dfs/data/data2/current/BP-1478983737-172.31.14.131-1690179036409] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:10,664 INFO [Listener at localhost/46655] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 06:11:10,783 INFO [Listener at localhost/46655] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 06:11:10,849 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 06:11:10,849 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 06:11:10,849 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.log.dir so I do NOT create it in target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1 2023-07-24 06:11:10,849 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80e1ad84-8a7a-b5ef-a23f-86c94b87db87/hadoop.tmp.dir so I do NOT create it in target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1 2023-07-24 06:11:10,849 INFO [Listener at localhost/46655] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f, deleteOnExit=true 2023-07-24 06:11:10,849 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 06:11:10,849 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/test.cache.data in system properties and HBase conf 2023-07-24 06:11:10,849 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 06:11:10,850 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir in system properties and HBase conf 2023-07-24 06:11:10,850 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 06:11:10,850 INFO [Listener at 
localhost/46655] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 06:11:10,850 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 06:11:10,850 DEBUG [Listener at localhost/46655] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 06:11:10,851 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 06:11:10,852 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/nfs.dump.dir in system properties and HBase conf 2023-07-24 06:11:10,852 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir in system properties and HBase conf 2023-07-24 06:11:10,852 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 06:11:10,852 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 06:11:10,852 INFO [Listener at localhost/46655] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 06:11:10,856 WARN [Listener at localhost/46655] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 06:11:10,856 WARN [Listener at localhost/46655] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 06:11:10,879 DEBUG [Listener at localhost/46655-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10195f3f3a2000a, quorum=127.0.0.1:54990, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 06:11:10,879 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10195f3f3a2000a, quorum=127.0.0.1:54990, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 06:11:10,903 WARN [Listener at localhost/46655] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:11:10,905 INFO [Listener at localhost/46655] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:11:10,909 INFO [Listener at localhost/46655] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir/Jetty_localhost_45187_hdfs____3cx9q1/webapp 2023-07-24 06:11:11,007 INFO [Listener at localhost/46655] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45187 2023-07-24 06:11:11,012 WARN [Listener at localhost/46655] conf.Configuration(1701): 
No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 06:11:11,012 WARN [Listener at localhost/46655] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 06:11:11,059 WARN [Listener at localhost/43327] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:11:11,074 WARN [Listener at localhost/43327] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:11:11,076 WARN [Listener at localhost/43327] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:11:11,077 INFO [Listener at localhost/43327] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:11:11,082 INFO [Listener at localhost/43327] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir/Jetty_localhost_36893_datanode____.la4m8a/webapp 2023-07-24 06:11:11,176 INFO [Listener at localhost/43327] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36893 2023-07-24 06:11:11,183 WARN [Listener at localhost/46523] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:11:11,222 WARN [Listener at localhost/46523] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:11:11,225 WARN [Listener at localhost/46523] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:11:11,227 INFO [Listener at localhost/46523] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:11:11,230 INFO [Listener at localhost/46523] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir/Jetty_localhost_46075_datanode____lr3lf7/webapp 2023-07-24 06:11:11,310 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7c1f9c87086e0e8b: Processing first storage report for DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1 from datanode d4729379-b093-4dc7-a6ba-b786b2567f05 2023-07-24 06:11:11,311 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7c1f9c87086e0e8b: from storage DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1 node DatanodeRegistration(127.0.0.1:38637, datanodeUuid=d4729379-b093-4dc7-a6ba-b786b2567f05, infoPort=35987, infoSecurePort=0, ipcPort=46523, storageInfo=lv=-57;cid=testClusterID;nsid=999773081;c=1690179070859), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:11,311 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7c1f9c87086e0e8b: Processing first storage report for DS-f99bdbdb-ba4a-40af-956a-e99f5a1c2835 from datanode d4729379-b093-4dc7-a6ba-b786b2567f05 2023-07-24 06:11:11,311 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7c1f9c87086e0e8b: from storage DS-f99bdbdb-ba4a-40af-956a-e99f5a1c2835 node 
DatanodeRegistration(127.0.0.1:38637, datanodeUuid=d4729379-b093-4dc7-a6ba-b786b2567f05, infoPort=35987, infoSecurePort=0, ipcPort=46523, storageInfo=lv=-57;cid=testClusterID;nsid=999773081;c=1690179070859), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:11,337 INFO [Listener at localhost/46523] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46075 2023-07-24 06:11:11,345 WARN [Listener at localhost/37759] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:11:11,375 WARN [Listener at localhost/37759] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:11:11,383 WARN [Listener at localhost/37759] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:11:11,384 INFO [Listener at localhost/37759] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:11:11,389 INFO [Listener at localhost/37759] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir/Jetty_localhost_43677_datanode____.q4dwbm/webapp 2023-07-24 06:11:11,464 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9e44d72831acf6c4: Processing first storage report for DS-296a7015-f975-4b57-a5d5-6cfa14e7e007 from datanode de68e7e5-08b7-4aa4-b94e-7f1182383163 2023-07-24 06:11:11,464 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9e44d72831acf6c4: from storage DS-296a7015-f975-4b57-a5d5-6cfa14e7e007 node DatanodeRegistration(127.0.0.1:37577, datanodeUuid=de68e7e5-08b7-4aa4-b94e-7f1182383163, infoPort=42559, infoSecurePort=0, ipcPort=37759, storageInfo=lv=-57;cid=testClusterID;nsid=999773081;c=1690179070859), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:11,464 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9e44d72831acf6c4: Processing first storage report for DS-0a9f99a6-e534-4553-95de-f6bbd59564e0 from datanode de68e7e5-08b7-4aa4-b94e-7f1182383163 2023-07-24 06:11:11,464 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9e44d72831acf6c4: from storage DS-0a9f99a6-e534-4553-95de-f6bbd59564e0 node DatanodeRegistration(127.0.0.1:37577, datanodeUuid=de68e7e5-08b7-4aa4-b94e-7f1182383163, infoPort=42559, infoSecurePort=0, ipcPort=37759, storageInfo=lv=-57;cid=testClusterID;nsid=999773081;c=1690179070859), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:11,499 INFO [Listener at localhost/37759] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43677 2023-07-24 06:11:11,505 WARN [Listener at localhost/33861] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:11:11,627 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7682da4c5d9d9ab: Processing first storage report for DS-62979269-6ba3-44f0-844f-39cacf582eec from datanode f67a515a-385d-4ef8-a221-e9774b2814a8 2023-07-24 06:11:11,627 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x7682da4c5d9d9ab: from storage DS-62979269-6ba3-44f0-844f-39cacf582eec node DatanodeRegistration(127.0.0.1:46281, datanodeUuid=f67a515a-385d-4ef8-a221-e9774b2814a8, infoPort=33949, infoSecurePort=0, ipcPort=33861, storageInfo=lv=-57;cid=testClusterID;nsid=999773081;c=1690179070859), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:11,627 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7682da4c5d9d9ab: Processing first storage report for DS-e9882580-a26a-43f0-b868-8574e3b3491f from datanode f67a515a-385d-4ef8-a221-e9774b2814a8 2023-07-24 06:11:11,627 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7682da4c5d9d9ab: from storage DS-e9882580-a26a-43f0-b868-8574e3b3491f node DatanodeRegistration(127.0.0.1:46281, datanodeUuid=f67a515a-385d-4ef8-a221-e9774b2814a8, infoPort=33949, infoSecurePort=0, ipcPort=33861, storageInfo=lv=-57;cid=testClusterID;nsid=999773081;c=1690179070859), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:11,726 DEBUG [Listener at localhost/33861] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1 2023-07-24 06:11:11,729 INFO [Listener at localhost/33861] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/zookeeper_0, clientPort=57631, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 06:11:11,731 INFO [Listener at localhost/33861] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57631 2023-07-24 06:11:11,731 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:11,732 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:11,753 INFO [Listener at localhost/33861] util.FSUtils(471): Created version file at hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58 with version=8 2023-07-24 06:11:11,753 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/hbase-staging 2023-07-24 06:11:11,754 DEBUG [Listener at localhost/33861] hbase.LocalHBaseCluster(134): Setting Master Port to random. 
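The second mini cluster is brought up with the same StartMiniClusterOption printed above (1 master, 3 region servers, 3 DataNodes, 1 ZooKeeper server), which covers the DFS startup, the MiniZooKeeperCluster on a random client port, and the hbase.version file just created. A hedged sketch of the corresponding startup call, assuming the HBase 2.x StartMiniClusterOption builder; TEST_UTIL is again an illustrative name:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterStartupSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      public static void startCluster() throws Exception {
        // Values mirror the StartMiniClusterOption printed in the log.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        // Starts DFS (the DataNode block reports above), a mini ZooKeeper
        // cluster on a random client port, writes the version file under
        // hbase.rootdir, then launches the master and region servers.
        TEST_UTIL.startMiniCluster(option);
      }
    }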
2023-07-24 06:11:11,754 DEBUG [Listener at localhost/33861] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 06:11:11,754 DEBUG [Listener at localhost/33861] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 06:11:11,754 DEBUG [Listener at localhost/33861] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-24 06:11:11,755 INFO [Listener at localhost/33861] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:11,756 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:11,756 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:11,756 INFO [Listener at localhost/33861] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:11,756 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:11,756 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:11,756 INFO [Listener at localhost/33861] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:11,757 INFO [Listener at localhost/33861] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44691 2023-07-24 06:11:11,758 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:11,759 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:11,761 INFO [Listener at localhost/33861] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44691 connecting to ZooKeeper ensemble=127.0.0.1:57631 2023-07-24 06:11:11,771 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:446910x0, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:11,771 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44691-0x10195f471f30000 connected 2023-07-24 06:11:11,792 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:11,794 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:11,794 DEBUG [Listener at localhost/33861] 
zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:11,802 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44691 2023-07-24 06:11:11,803 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44691 2023-07-24 06:11:11,804 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44691 2023-07-24 06:11:11,804 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44691 2023-07-24 06:11:11,804 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44691 2023-07-24 06:11:11,807 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:11,807 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:11,807 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:11,807 INFO [Listener at localhost/33861] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 06:11:11,807 INFO [Listener at localhost/33861] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:11,807 INFO [Listener at localhost/33861] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:11,808 INFO [Listener at localhost/33861] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
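The small handler counts in the RpcExecutor lines above (handlerCount=3) are typical of a test-scale configuration rather than production defaults. A sketch of how such values can be set on the shared configuration before startup; the key constants are standard HBase configuration keys, but whether this particular test sets these exact keys is an assumption:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;

    public class TestScaleConfSketch {
      public static Configuration testScaleConf() {
        Configuration conf = HBaseConfiguration.create();
        // Three RPC handlers per server keeps the mini cluster lightweight
        // (matching the handlerCount=3 executors logged above).
        conf.setInt(HConstants.REGION_SERVER_HANDLER_COUNT, 3);
        // Client-side retry budget; the "server-side Connection retries=45"
        // figure above is this value scaled up by a server-side multiplier.
        conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 15);
        return conf;
      }
    }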
2023-07-24 06:11:11,808 INFO [Listener at localhost/33861] http.HttpServer(1146): Jetty bound to port 39421 2023-07-24 06:11:11,808 INFO [Listener at localhost/33861] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:11,815 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:11,816 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ea129d6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:11,816 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:11,816 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@730dd3c1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:11,933 INFO [Listener at localhost/33861] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:11,934 INFO [Listener at localhost/33861] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:11,934 INFO [Listener at localhost/33861] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:11,934 INFO [Listener at localhost/33861] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 06:11:11,935 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:11,936 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5c80d780{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir/jetty-0_0_0_0-39421-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3247612091585940854/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 06:11:11,938 INFO [Listener at localhost/33861] server.AbstractConnector(333): Started ServerConnector@1915edd{HTTP/1.1, (http/1.1)}{0.0.0.0:39421} 2023-07-24 06:11:11,938 INFO [Listener at localhost/33861] server.Server(415): Started @37539ms 2023-07-24 06:11:11,938 INFO [Listener at localhost/33861] master.HMaster(444): hbase.rootdir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58, hbase.cluster.distributed=false 2023-07-24 06:11:11,953 INFO [Listener at localhost/33861] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:11,954 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:11,954 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:11,954 INFO 
[Listener at localhost/33861] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:11,954 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:11,954 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:11,954 INFO [Listener at localhost/33861] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:11,956 INFO [Listener at localhost/33861] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35937 2023-07-24 06:11:11,956 INFO [Listener at localhost/33861] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:11:11,957 DEBUG [Listener at localhost/33861] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:11:11,958 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:11,959 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:11,960 INFO [Listener at localhost/33861] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35937 connecting to ZooKeeper ensemble=127.0.0.1:57631 2023-07-24 06:11:11,965 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:359370x0, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:11,966 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35937-0x10195f471f30001 connected 2023-07-24 06:11:11,966 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:11,967 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:11,968 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:11,968 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35937 2023-07-24 06:11:11,968 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35937 2023-07-24 06:11:11,972 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35937 2023-07-24 06:11:11,972 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried 
hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 06:11:11,975 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35937 2023-07-24 06:11:11,977 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35937 2023-07-24 06:11:11,981 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:11,981 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:11,981 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:11,982 INFO [Listener at localhost/33861] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:11:11,982 INFO [Listener at localhost/33861] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:11,983 INFO [Listener at localhost/33861] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:11,983 INFO [Listener at localhost/33861] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
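Each server above registers a ZKWatcher against the mini ZooKeeper ensemble (127.0.0.1:57631) and sets watchers on znodes such as /hbase/master, /hbase/running and /hbase/acl before they exist. A client or test reaches the same ensemble through the standard HBase client API; a minimal sketch, assuming the stock 2.x client (the literal port is simply the one this run happened to pick):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClientSketch {
      public static void connect() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set(HConstants.ZOOKEEPER_QUORUM, "127.0.0.1");
        // 57631 is the random client port chosen by this run's MiniZooKeeperCluster.
        conf.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, 57631);
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
          // The connection locates the active master and region servers through
          // the same /hbase znodes the servers register above.
        }
      }
    }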
2023-07-24 06:11:11,984 INFO [Listener at localhost/33861] http.HttpServer(1146): Jetty bound to port 42869 2023-07-24 06:11:11,984 INFO [Listener at localhost/33861] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:12,000 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,001 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38a9d218{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:12,001 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,001 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7ebbee1c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:12,129 INFO [Listener at localhost/33861] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:12,130 INFO [Listener at localhost/33861] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:12,130 INFO [Listener at localhost/33861] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:12,131 INFO [Listener at localhost/33861] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 06:11:12,132 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,133 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@16c56918{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir/jetty-0_0_0_0-42869-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4263609864065628773/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:12,135 INFO [Listener at localhost/33861] server.AbstractConnector(333): Started ServerConnector@c7ee5df{HTTP/1.1, (http/1.1)}{0.0.0.0:42869} 2023-07-24 06:11:12,135 INFO [Listener at localhost/33861] server.Server(415): Started @37736ms 2023-07-24 06:11:12,147 INFO [Listener at localhost/33861] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:12,147 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:12,148 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:12,148 INFO [Listener at localhost/33861] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:12,148 INFO 
[Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:12,148 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:12,148 INFO [Listener at localhost/33861] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:12,150 INFO [Listener at localhost/33861] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35909 2023-07-24 06:11:12,150 INFO [Listener at localhost/33861] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:11:12,151 DEBUG [Listener at localhost/33861] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:11:12,152 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:12,153 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:12,154 INFO [Listener at localhost/33861] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35909 connecting to ZooKeeper ensemble=127.0.0.1:57631 2023-07-24 06:11:12,160 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:359090x0, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:12,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35909-0x10195f471f30002 connected 2023-07-24 06:11:12,161 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:12,162 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:12,163 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:12,163 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35909 2023-07-24 06:11:12,163 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35909 2023-07-24 06:11:12,163 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35909 2023-07-24 06:11:12,164 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35909 2023-07-24 06:11:12,164 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35909 2023-07-24 06:11:12,166 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:12,166 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:12,166 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:12,167 INFO [Listener at localhost/33861] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:11:12,167 INFO [Listener at localhost/33861] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:12,167 INFO [Listener at localhost/33861] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:12,168 INFO [Listener at localhost/33861] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 06:11:12,168 INFO [Listener at localhost/33861] http.HttpServer(1146): Jetty bound to port 33193 2023-07-24 06:11:12,169 INFO [Listener at localhost/33861] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:12,172 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,172 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ab99eeb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:12,172 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,173 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2dd59250{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:12,293 INFO [Listener at localhost/33861] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:12,294 INFO [Listener at localhost/33861] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:12,294 INFO [Listener at localhost/33861] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:12,294 INFO [Listener at localhost/33861] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 06:11:12,295 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,296 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@55941c90{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir/jetty-0_0_0_0-33193-hbase-server-2_4_18-SNAPSHOT_jar-_-any-27110266099329574/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:12,297 INFO [Listener at localhost/33861] server.AbstractConnector(333): Started ServerConnector@6bb9847c{HTTP/1.1, (http/1.1)}{0.0.0.0:33193} 2023-07-24 06:11:12,297 INFO [Listener at localhost/33861] server.Server(415): Started @37898ms 2023-07-24 06:11:12,313 INFO [Listener at localhost/33861] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:12,313 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:12,313 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:12,313 INFO [Listener at localhost/33861] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:12,313 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:12,313 INFO [Listener at localhost/33861] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:12,313 INFO [Listener at localhost/33861] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:12,315 INFO [Listener at localhost/33861] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42129 2023-07-24 06:11:12,316 INFO [Listener at localhost/33861] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:11:12,317 DEBUG [Listener at localhost/33861] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:11:12,317 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:12,318 INFO [Listener at localhost/33861] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:12,320 INFO [Listener at localhost/33861] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42129 connecting to ZooKeeper ensemble=127.0.0.1:57631 2023-07-24 06:11:12,323 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:421290x0, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:12,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:42129-0x10195f471f30003 connected 2023-07-24 06:11:12,325 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:12,325 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:12,326 DEBUG [Listener at localhost/33861] zookeeper.ZKUtil(164): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:12,326 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42129 2023-07-24 06:11:12,327 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42129 2023-07-24 06:11:12,330 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42129 2023-07-24 06:11:12,333 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42129 2023-07-24 06:11:12,333 DEBUG [Listener at localhost/33861] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42129 2023-07-24 06:11:12,335 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:12,335 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:12,335 INFO [Listener at localhost/33861] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:12,336 INFO [Listener at localhost/33861] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:11:12,336 INFO [Listener at localhost/33861] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:12,336 INFO [Listener at localhost/33861] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:12,336 INFO [Listener at localhost/33861] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 06:11:12,337 INFO [Listener at localhost/33861] http.HttpServer(1146): Jetty bound to port 40209 2023-07-24 06:11:12,337 INFO [Listener at localhost/33861] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:12,340 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,340 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a1ad62a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:12,340 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,340 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6cd10080{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:12,459 INFO [Listener at localhost/33861] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:12,461 INFO [Listener at localhost/33861] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:12,461 INFO [Listener at localhost/33861] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:12,461 INFO [Listener at localhost/33861] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 06:11:12,462 INFO [Listener at localhost/33861] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:12,464 INFO [Listener at localhost/33861] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7fb867a7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/java.io.tmpdir/jetty-0_0_0_0-40209-hbase-server-2_4_18-SNAPSHOT_jar-_-any-228505282334701961/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:12,465 INFO [Listener at localhost/33861] server.AbstractConnector(333): Started ServerConnector@1934427{HTTP/1.1, (http/1.1)}{0.0.0.0:40209} 2023-07-24 06:11:12,466 INFO [Listener at localhost/33861] server.Server(415): Started @38066ms 2023-07-24 06:11:12,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:12,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1f62082d{HTTP/1.1, (http/1.1)}{0.0.0.0:33195} 2023-07-24 06:11:12,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38072ms 2023-07-24 06:11:12,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:12,473 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, 
quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 06:11:12,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:12,476 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:12,476 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:12,476 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:12,476 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:12,476 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:12,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 06:11:12,479 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 06:11:12,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44691,1690179071755 from backup master directory 2023-07-24 06:11:12,480 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:12,480 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 06:11:12,480 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 06:11:12,480 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:12,504 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/hbase.id with ID: 5dcdf353-e77d-416b-bd19-8714635fbb43 2023-07-24 06:11:12,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:12,521 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:12,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3f15d7b1 to 127.0.0.1:57631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:12,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@702d03ec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:12,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:12,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 06:11:12,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:12,559 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store-tmp 2023-07-24 06:11:12,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:12,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 06:11:12,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:12,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:12,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 06:11:12,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:12,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:12,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 06:11:12,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/WALs/jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:12,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44691%2C1690179071755, suffix=, logDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/WALs/jenkins-hbase4.apache.org,44691,1690179071755, archiveDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/oldWALs, maxLogs=10 2023-07-24 06:11:12,617 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK] 2023-07-24 06:11:12,619 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK] 2023-07-24 06:11:12,623 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK] 2023-07-24 06:11:12,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/WALs/jenkins-hbase4.apache.org,44691,1690179071755/jenkins-hbase4.apache.org%2C44691%2C1690179071755.1690179072594 2023-07-24 06:11:12,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK], DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK], DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK]] 2023-07-24 06:11:12,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', 
STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:12,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:12,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:12,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:12,632 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:12,633 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 06:11:12,634 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 06:11:12,635 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:12,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:12,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:12,638 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:12,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:12,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10595743040, jitterRate=-0.013194531202316284}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:12,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 06:11:12,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 06:11:12,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 06:11:12,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 06:11:12,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 06:11:12,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 06:11:12,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 06:11:12,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 06:11:12,647 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 06:11:12,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-24 06:11:12,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 06:11:12,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 06:11:12,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 06:11:12,652 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:12,653 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 06:11:12,653 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 06:11:12,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 06:11:12,655 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:12,655 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:12,655 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:12,655 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:12,656 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:12,656 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44691,1690179071755, sessionid=0x10195f471f30000, setting cluster-up flag (Was=false) 2023-07-24 06:11:12,661 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:12,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 06:11:12,667 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:12,670 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:12,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 06:11:12,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:12,675 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.hbase-snapshot/.tmp 2023-07-24 06:11:12,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 06:11:12,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 06:11:12,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 06:11:12,678 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 06:11:12,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-24 06:11:12,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-24 06:11:12,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 06:11:12,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 06:11:12,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-24 06:11:12,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 06:11:12,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 06:11:12,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:11:12,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:11:12,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:11:12,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:11:12,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 06:11:12,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:12,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690179102697 2023-07-24 06:11:12,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 06:11:12,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 06:11:12,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 06:11:12,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 06:11:12,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 06:11:12,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 06:11:12,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,699 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 06:11:12,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 06:11:12,699 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 06:11:12,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 06:11:12,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 06:11:12,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 06:11:12,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 06:11:12,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179072700,5,FailOnTimeoutGroup] 2023-07-24 06:11:12,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179072700,5,FailOnTimeoutGroup] 2023-07-24 06:11:12,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 06:11:12,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:12,701 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:12,719 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:12,719 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:12,720 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58 2023-07-24 06:11:12,729 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:12,731 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 06:11:12,732 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/info 2023-07-24 06:11:12,732 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 06:11:12,733 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:12,733 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 06:11:12,734 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:11:12,735 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 06:11:12,735 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:12,735 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 06:11:12,736 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/table 2023-07-24 06:11:12,737 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 06:11:12,737 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:12,738 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740 2023-07-24 06:11:12,738 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740 2023-07-24 06:11:12,740 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 06:11:12,741 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 06:11:12,744 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:12,744 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10694805600, jitterRate=-0.0039686113595962524}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 06:11:12,744 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 06:11:12,744 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 06:11:12,744 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 06:11:12,744 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 06:11:12,745 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 06:11:12,745 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 06:11:12,745 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 06:11:12,745 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 06:11:12,746 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 06:11:12,746 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 06:11:12,746 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 
06:11:12,748 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 06:11:12,750 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 06:11:12,768 INFO [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(951): ClusterId : 5dcdf353-e77d-416b-bd19-8714635fbb43 2023-07-24 06:11:12,768 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(951): ClusterId : 5dcdf353-e77d-416b-bd19-8714635fbb43 2023-07-24 06:11:12,770 DEBUG [RS:0;jenkins-hbase4:35937] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:11:12,768 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(951): ClusterId : 5dcdf353-e77d-416b-bd19-8714635fbb43 2023-07-24 06:11:12,771 DEBUG [RS:2;jenkins-hbase4:42129] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:11:12,772 DEBUG [RS:1;jenkins-hbase4:35909] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:11:12,775 DEBUG [RS:1;jenkins-hbase4:35909] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:11:12,776 DEBUG [RS:1;jenkins-hbase4:35909] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:11:12,777 DEBUG [RS:0;jenkins-hbase4:35937] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:11:12,777 DEBUG [RS:0;jenkins-hbase4:35937] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:11:12,777 DEBUG [RS:2;jenkins-hbase4:42129] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:11:12,777 DEBUG [RS:2;jenkins-hbase4:42129] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:11:12,778 DEBUG [RS:1;jenkins-hbase4:35909] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:11:12,779 DEBUG [RS:1;jenkins-hbase4:35909] zookeeper.ReadOnlyZKClient(139): Connect 0x56c18f3a to 127.0.0.1:57631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:12,780 DEBUG [RS:0;jenkins-hbase4:35937] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:11:12,781 DEBUG [RS:0;jenkins-hbase4:35937] zookeeper.ReadOnlyZKClient(139): Connect 0x35d21800 to 127.0.0.1:57631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:12,787 DEBUG [RS:2;jenkins-hbase4:42129] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:11:12,791 DEBUG [RS:2;jenkins-hbase4:42129] zookeeper.ReadOnlyZKClient(139): Connect 0x3a7f31cb to 127.0.0.1:57631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:12,803 DEBUG [RS:1;jenkins-hbase4:35909] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58067b92, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:12,804 DEBUG [RS:1;jenkins-hbase4:35909] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c39e44a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:12,807 DEBUG [RS:0;jenkins-hbase4:35937] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f2a7370, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:12,807 DEBUG [RS:2;jenkins-hbase4:42129] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30875abb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:12,807 DEBUG [RS:0;jenkins-hbase4:35937] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a28eaf1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:12,807 DEBUG [RS:2;jenkins-hbase4:42129] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7585d2cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:12,818 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35909 2023-07-24 06:11:12,818 INFO [RS:1;jenkins-hbase4:35909] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:11:12,818 INFO [RS:1;jenkins-hbase4:35909] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:11:12,818 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:11:12,819 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44691,1690179071755 with isa=jenkins-hbase4.apache.org/172.31.14.131:35909, startcode=1690179072147 2023-07-24 06:11:12,819 DEBUG [RS:1;jenkins-hbase4:35909] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:11:12,819 DEBUG [RS:0;jenkins-hbase4:35937] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35937 2023-07-24 06:11:12,819 INFO [RS:0;jenkins-hbase4:35937] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:11:12,819 INFO [RS:0;jenkins-hbase4:35937] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:11:12,819 DEBUG [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 06:11:12,820 INFO [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44691,1690179071755 with isa=jenkins-hbase4.apache.org/172.31.14.131:35937, startcode=1690179071953 2023-07-24 06:11:12,820 DEBUG [RS:0;jenkins-hbase4:35937] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:11:12,820 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:42129 2023-07-24 06:11:12,820 INFO [RS:2;jenkins-hbase4:42129] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:11:12,820 INFO [RS:2;jenkins-hbase4:42129] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:11:12,820 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:11:12,821 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44691,1690179071755 with isa=jenkins-hbase4.apache.org/172.31.14.131:42129, startcode=1690179072312 2023-07-24 06:11:12,821 DEBUG [RS:2;jenkins-hbase4:42129] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:11:12,824 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38161, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:11:12,831 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44691] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:12,831 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 06:11:12,832 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 06:11:12,832 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52709, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:11:12,832 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42801, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:11:12,832 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58 2023-07-24 06:11:12,832 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44691] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:12,832 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43327 2023-07-24 06:11:12,832 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39421 2023-07-24 06:11:12,832 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 06:11:12,832 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44691] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:12,832 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 06:11:12,833 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 06:11:12,833 DEBUG [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58 2023-07-24 06:11:12,833 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 06:11:12,833 DEBUG [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43327 2023-07-24 06:11:12,833 DEBUG [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39421 2023-07-24 06:11:12,833 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58 2023-07-24 06:11:12,833 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43327 2023-07-24 06:11:12,833 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39421 2023-07-24 06:11:12,837 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:12,840 DEBUG [RS:2;jenkins-hbase4:42129] zookeeper.ZKUtil(162): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:12,840 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35909,1690179072147] 2023-07-24 06:11:12,840 WARN [RS:2;jenkins-hbase4:42129] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 06:11:12,840 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42129,1690179072312] 2023-07-24 06:11:12,840 DEBUG [RS:1;jenkins-hbase4:35909] zookeeper.ZKUtil(162): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:12,840 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35937,1690179071953] 2023-07-24 06:11:12,840 DEBUG [RS:0;jenkins-hbase4:35937] zookeeper.ZKUtil(162): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:12,840 WARN [RS:1;jenkins-hbase4:35909] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 06:11:12,840 INFO [RS:2;jenkins-hbase4:42129] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:12,840 INFO [RS:1;jenkins-hbase4:35909] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:12,840 WARN [RS:0;jenkins-hbase4:35937] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 06:11:12,841 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:12,841 INFO [RS:0;jenkins-hbase4:35937] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:12,841 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:12,841 DEBUG [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:12,853 DEBUG [RS:1;jenkins-hbase4:35909] zookeeper.ZKUtil(162): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:12,853 DEBUG [RS:0;jenkins-hbase4:35937] zookeeper.ZKUtil(162): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:12,853 DEBUG [RS:2;jenkins-hbase4:42129] zookeeper.ZKUtil(162): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:12,853 DEBUG [RS:1;jenkins-hbase4:35909] zookeeper.ZKUtil(162): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:12,853 DEBUG [RS:0;jenkins-hbase4:35937] zookeeper.ZKUtil(162): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:12,853 DEBUG [RS:2;jenkins-hbase4:42129] zookeeper.ZKUtil(162): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:12,854 DEBUG [RS:1;jenkins-hbase4:35909] zookeeper.ZKUtil(162): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:12,855 DEBUG [RS:0;jenkins-hbase4:35937] zookeeper.ZKUtil(162): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:12,855 DEBUG [RS:2;jenkins-hbase4:42129] zookeeper.ZKUtil(162): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:12,855 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:11:12,855 INFO [RS:1;jenkins-hbase4:35909] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:11:12,855 DEBUG [RS:0;jenkins-hbase4:35937] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:11:12,857 INFO [RS:0;jenkins-hbase4:35937] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:11:12,858 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:11:12,858 INFO [RS:1;jenkins-hbase4:35909] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:11:12,860 INFO [RS:2;jenkins-hbase4:42129] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:11:12,861 INFO [RS:1;jenkins-hbase4:35909] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:11:12,862 INFO [RS:0;jenkins-hbase4:35937] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:11:12,862 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,862 INFO [RS:2;jenkins-hbase4:42129] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:11:12,862 INFO [RS:0;jenkins-hbase4:35937] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:11:12,862 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,862 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:11:12,868 INFO [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:11:12,869 INFO [RS:2;jenkins-hbase4:42129] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:11:12,869 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,869 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:11:12,869 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:12,870 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,870 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,870 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:12,871 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:12,871 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:12,872 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:1;jenkins-hbase4:35909] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,871 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:0;jenkins-hbase4:35937] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service 
name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,872 DEBUG [RS:2;jenkins-hbase4:42129] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:12,880 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,880 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,880 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,880 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,880 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,880 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,881 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,881 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,881 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,881 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,881 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,881 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,893 INFO [RS:0;jenkins-hbase4:35937] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:11:12,894 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35937,1690179071953-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:12,895 INFO [RS:2;jenkins-hbase4:42129] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:11:12,895 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42129,1690179072312-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,898 INFO [RS:1;jenkins-hbase4:35909] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:11:12,898 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35909,1690179072147-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,900 DEBUG [jenkins-hbase4:44691] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 06:11:12,900 DEBUG [jenkins-hbase4:44691] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:12,900 DEBUG [jenkins-hbase4:44691] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:12,900 DEBUG [jenkins-hbase4:44691] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:12,900 DEBUG [jenkins-hbase4:44691] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:12,900 DEBUG [jenkins-hbase4:44691] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:12,902 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35909,1690179072147, state=OPENING 2023-07-24 06:11:12,905 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 06:11:12,906 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:12,906 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35909,1690179072147}] 2023-07-24 06:11:12,907 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 06:11:12,911 INFO [RS:0;jenkins-hbase4:35937] regionserver.Replication(203): jenkins-hbase4.apache.org,35937,1690179071953 started 2023-07-24 06:11:12,911 INFO [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35937,1690179071953, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35937, sessionid=0x10195f471f30001 2023-07-24 06:11:12,911 DEBUG [RS:0;jenkins-hbase4:35937] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:11:12,911 DEBUG [RS:0;jenkins-hbase4:35937] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:12,911 DEBUG [RS:0;jenkins-hbase4:35937] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35937,1690179071953' 2023-07-24 06:11:12,911 DEBUG [RS:0;jenkins-hbase4:35937] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:11:12,911 DEBUG 
[RS:0;jenkins-hbase4:35937] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:11:12,912 DEBUG [RS:0;jenkins-hbase4:35937] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:11:12,912 DEBUG [RS:0;jenkins-hbase4:35937] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:11:12,912 DEBUG [RS:0;jenkins-hbase4:35937] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:12,912 DEBUG [RS:0;jenkins-hbase4:35937] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35937,1690179071953' 2023-07-24 06:11:12,912 DEBUG [RS:0;jenkins-hbase4:35937] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:11:12,913 DEBUG [RS:0;jenkins-hbase4:35937] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:11:12,913 DEBUG [RS:0;jenkins-hbase4:35937] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:11:12,913 INFO [RS:0;jenkins-hbase4:35937] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 06:11:12,914 INFO [RS:2;jenkins-hbase4:42129] regionserver.Replication(203): jenkins-hbase4.apache.org,42129,1690179072312 started 2023-07-24 06:11:12,914 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42129,1690179072312, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42129, sessionid=0x10195f471f30003 2023-07-24 06:11:12,916 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:12,916 INFO [RS:1;jenkins-hbase4:35909] regionserver.Replication(203): jenkins-hbase4.apache.org,35909,1690179072147 started 2023-07-24 06:11:12,916 DEBUG [RS:2;jenkins-hbase4:42129] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:11:12,916 DEBUG [RS:2;jenkins-hbase4:42129] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:12,916 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35909,1690179072147, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35909, sessionid=0x10195f471f30002 2023-07-24 06:11:12,916 DEBUG [RS:2;jenkins-hbase4:42129] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42129,1690179072312' 2023-07-24 06:11:12,916 DEBUG [RS:1;jenkins-hbase4:35909] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:11:12,916 DEBUG [RS:1;jenkins-hbase4:35909] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:12,916 DEBUG [RS:1;jenkins-hbase4:35909] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35909,1690179072147' 2023-07-24 06:11:12,916 DEBUG [RS:1;jenkins-hbase4:35909] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:11:12,916 DEBUG [RS:2;jenkins-hbase4:42129] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:11:12,916 DEBUG [RS:0;jenkins-hbase4:35937] zookeeper.ZKUtil(398): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 06:11:12,917 INFO [RS:0;jenkins-hbase4:35937] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 06:11:12,917 DEBUG [RS:2;jenkins-hbase4:42129] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:11:12,917 DEBUG [RS:2;jenkins-hbase4:42129] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:11:12,917 DEBUG [RS:2;jenkins-hbase4:42129] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:11:12,917 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,917 DEBUG [RS:2;jenkins-hbase4:42129] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:12,918 DEBUG [RS:2;jenkins-hbase4:42129] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42129,1690179072312' 2023-07-24 06:11:12,918 DEBUG [RS:2;jenkins-hbase4:42129] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:11:12,918 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:12,918 DEBUG [RS:2;jenkins-hbase4:42129] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:11:12,920 DEBUG [RS:2;jenkins-hbase4:42129] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:11:12,920 INFO [RS:2;jenkins-hbase4:42129] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 06:11:12,920 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,923 DEBUG [RS:2;jenkins-hbase4:42129] zookeeper.ZKUtil(398): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 06:11:12,923 INFO [RS:2;jenkins-hbase4:42129] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 06:11:12,923 DEBUG [RS:1;jenkins-hbase4:35909] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:11:12,923 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,923 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,923 DEBUG [RS:1;jenkins-hbase4:35909] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:11:12,923 DEBUG [RS:1;jenkins-hbase4:35909] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:11:12,923 DEBUG [RS:1;jenkins-hbase4:35909] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:12,924 DEBUG [RS:1;jenkins-hbase4:35909] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35909,1690179072147' 2023-07-24 06:11:12,924 DEBUG [RS:1;jenkins-hbase4:35909] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:11:12,924 DEBUG [RS:1;jenkins-hbase4:35909] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:11:12,924 DEBUG [RS:1;jenkins-hbase4:35909] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:11:12,925 INFO [RS:1;jenkins-hbase4:35909] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 06:11:12,925 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,925 DEBUG [RS:1;jenkins-hbase4:35909] zookeeper.ZKUtil(398): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 06:11:12,925 INFO [RS:1;jenkins-hbase4:35909] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 06:11:12,925 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:12,926 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:12,991 WARN [ReadOnlyZKClient-127.0.0.1:57631@0x3f15d7b1] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 06:11:12,991 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44691,1690179071755] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:11:12,994 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38974, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:11:12,995 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35909] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:38974 deadline: 1690179132995, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:13,022 INFO [RS:0;jenkins-hbase4:35937] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35937%2C1690179071953, suffix=, logDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,35937,1690179071953, archiveDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/oldWALs, maxLogs=32 2023-07-24 06:11:13,025 INFO [RS:2;jenkins-hbase4:42129] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42129%2C1690179072312, suffix=, logDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,42129,1690179072312, archiveDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/oldWALs, maxLogs=32 2023-07-24 06:11:13,028 INFO [RS:1;jenkins-hbase4:35909] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35909%2C1690179072147, suffix=, logDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,35909,1690179072147, archiveDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/oldWALs, maxLogs=32 2023-07-24 06:11:13,048 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK] 2023-07-24 06:11:13,057 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK] 2023-07-24 06:11:13,057 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK] 2023-07-24 06:11:13,057 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK] 2023-07-24 06:11:13,058 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK] 2023-07-24 06:11:13,058 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK] 2023-07-24 06:11:13,064 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:13,066 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:11:13,066 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK] 2023-07-24 06:11:13,073 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK] 2023-07-24 06:11:13,074 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK] 2023-07-24 06:11:13,075 INFO [RS:2;jenkins-hbase4:42129] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,42129,1690179072312/jenkins-hbase4.apache.org%2C42129%2C1690179072312.1690179073026 2023-07-24 06:11:13,081 INFO [RS:0;jenkins-hbase4:35937] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,35937,1690179071953/jenkins-hbase4.apache.org%2C35937%2C1690179071953.1690179073023 2023-07-24 06:11:13,082 DEBUG [RS:2;jenkins-hbase4:42129] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK], DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK], DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK]] 2023-07-24 06:11:13,085 DEBUG [RS:0;jenkins-hbase4:35937] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK], DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK], DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK]] 2023-07-24 06:11:13,085 INFO [RS:1;jenkins-hbase4:35909] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,35909,1690179072147/jenkins-hbase4.apache.org%2C35909%2C1690179072147.1690179073029 2023-07-24 06:11:13,086 INFO 
[RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38980, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:11:13,086 DEBUG [RS:1;jenkins-hbase4:35909] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK], DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK], DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK]] 2023-07-24 06:11:13,094 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 06:11:13,094 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:13,096 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35909%2C1690179072147.meta, suffix=.meta, logDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,35909,1690179072147, archiveDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/oldWALs, maxLogs=32 2023-07-24 06:11:13,115 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK] 2023-07-24 06:11:13,115 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK] 2023-07-24 06:11:13,120 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK] 2023-07-24 06:11:13,122 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/WALs/jenkins-hbase4.apache.org,35909,1690179072147/jenkins-hbase4.apache.org%2C35909%2C1690179072147.meta.1690179073096.meta 2023-07-24 06:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38637,DS-4ad8f5c5-744a-4209-a2f7-eb1a4f9927c1,DISK], DatanodeInfoWithStorage[127.0.0.1:37577,DS-296a7015-f975-4b57-a5d5-6cfa14e7e007,DISK], DatanodeInfoWithStorage[127.0.0.1:46281,DS-62979269-6ba3-44f0-844f-39cacf582eec,DISK]] 2023-07-24 06:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 06:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: 
region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 06:11:13,123 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 06:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 06:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 06:11:13,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 06:11:13,127 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 06:11:13,128 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/info 2023-07-24 06:11:13,128 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/info 2023-07-24 06:11:13,128 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 06:11:13,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:13,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 06:11:13,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:11:13,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:11:13,131 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 06:11:13,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:13,132 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 06:11:13,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/table 2023-07-24 06:11:13,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/table 2023-07-24 06:11:13,133 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 06:11:13,134 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:13,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740 2023-07-24 06:11:13,136 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740 2023-07-24 06:11:13,138 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No 
hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 06:11:13,139 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 06:11:13,140 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11710878720, jitterRate=0.09066057205200195}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 06:11:13,140 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 06:11:13,141 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690179073064 2023-07-24 06:11:13,146 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 06:11:13,146 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 06:11:13,146 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35909,1690179072147, state=OPEN 2023-07-24 06:11:13,148 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 06:11:13,148 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 06:11:13,149 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 06:11:13,150 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35909,1690179072147 in 242 msec 2023-07-24 06:11:13,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 06:11:13,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 403 msec 2023-07-24 06:11:13,153 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 472 msec 2023-07-24 06:11:13,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690179073153, completionTime=-1 2023-07-24 06:11:13,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 06:11:13,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 06:11:13,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 06:11:13,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690179133159 2023-07-24 06:11:13,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690179193159 2023-07-24 06:11:13,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-24 06:11:13,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44691,1690179071755-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:13,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44691,1690179071755-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:13,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44691,1690179071755-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:13,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44691, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:13,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:13,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 06:11:13,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:13,168 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 06:11:13,168 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 06:11:13,170 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:13,171 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:13,172 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,173 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4 empty. 2023-07-24 06:11:13,173 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,173 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 06:11:13,186 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:13,188 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ab1743b12c3c40199d951ab9e788e8a4, NAME => 'hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp 2023-07-24 06:11:13,198 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:13,198 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ab1743b12c3c40199d951ab9e788e8a4, disabling compactions & flushes 2023-07-24 06:11:13,198 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 
2023-07-24 06:11:13,198 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 2023-07-24 06:11:13,198 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. after waiting 0 ms 2023-07-24 06:11:13,198 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 2023-07-24 06:11:13,198 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 2023-07-24 06:11:13,198 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ab1743b12c3c40199d951ab9e788e8a4: 2023-07-24 06:11:13,201 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:13,202 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179073202"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179073202"}]},"ts":"1690179073202"} 2023-07-24 06:11:13,204 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:11:13,205 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:13,205 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179073205"}]},"ts":"1690179073205"} 2023-07-24 06:11:13,206 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 06:11:13,209 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:13,210 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:13,210 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:13,210 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:13,210 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:13,210 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ab1743b12c3c40199d951ab9e788e8a4, ASSIGN}] 2023-07-24 06:11:13,212 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ab1743b12c3c40199d951ab9e788e8a4, ASSIGN 2023-07-24 06:11:13,213 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ab1743b12c3c40199d951ab9e788e8a4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42129,1690179072312; forceNewPlan=false, retain=false 2023-07-24 06:11:13,297 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44691,1690179071755] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:13,299 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44691,1690179071755] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 06:11:13,301 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:13,302 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:13,303 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,304 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca empty. 
2023-07-24 06:11:13,304 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,305 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 06:11:13,323 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:13,330 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ea3018329bd0900b80ece2725e52bcca, NAME => 'hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp 2023-07-24 06:11:13,345 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:13,345 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing ea3018329bd0900b80ece2725e52bcca, disabling compactions & flushes 2023-07-24 06:11:13,345 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:13,345 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:13,345 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. after waiting 0 ms 2023-07-24 06:11:13,345 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:13,345 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 
2023-07-24 06:11:13,345 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for ea3018329bd0900b80ece2725e52bcca: 2023-07-24 06:11:13,348 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:13,349 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179073349"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179073349"}]},"ts":"1690179073349"} 2023-07-24 06:11:13,351 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:11:13,352 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:13,352 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179073352"}]},"ts":"1690179073352"} 2023-07-24 06:11:13,353 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 06:11:13,358 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:13,358 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:13,358 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:13,358 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:13,358 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:13,358 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ea3018329bd0900b80ece2725e52bcca, ASSIGN}] 2023-07-24 06:11:13,362 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ea3018329bd0900b80ece2725e52bcca, ASSIGN 2023-07-24 06:11:13,363 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=ea3018329bd0900b80ece2725e52bcca, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42129,1690179072312; forceNewPlan=false, retain=false 2023-07-24 06:11:13,363 INFO [jenkins-hbase4:44691] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
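The 'hbase:rsgroup' table created above carries two table attributes: the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy split policy. As a hedged sketch (the table name "demo_rsgroup_like" is hypothetical), a client would attach the same attributes to its own table like this with the 2.x builder API:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public class CoprocessorTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor table = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
          // Same coprocessor the rsgroup table loads, per the log entries above.
          .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
          // Keep the table in a single region, as DisabledRegionSplitPolicy does for hbase:rsgroup.
          .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
          .build();
      admin.createTable(table);
    }
  }
}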
2023-07-24 06:11:13,364 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ab1743b12c3c40199d951ab9e788e8a4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:13,365 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179073364"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179073364"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179073364"}]},"ts":"1690179073364"} 2023-07-24 06:11:13,366 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure ab1743b12c3c40199d951ab9e788e8a4, server=jenkins-hbase4.apache.org,42129,1690179072312}] 2023-07-24 06:11:13,515 INFO [jenkins-hbase4:44691] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 06:11:13,516 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=ea3018329bd0900b80ece2725e52bcca, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:13,516 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179073516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179073516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179073516"}]},"ts":"1690179073516"} 2023-07-24 06:11:13,525 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure ea3018329bd0900b80ece2725e52bcca, server=jenkins-hbase4.apache.org,42129,1690179072312}] 2023-07-24 06:11:13,525 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:13,525 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:11:13,530 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51500, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:11:13,548 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 
2023-07-24 06:11:13,548 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab1743b12c3c40199d951ab9e788e8a4, NAME => 'hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:13,549 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,549 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:13,549 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,549 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,551 INFO [StoreOpener-ab1743b12c3c40199d951ab9e788e8a4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,552 DEBUG [StoreOpener-ab1743b12c3c40199d951ab9e788e8a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4/info 2023-07-24 06:11:13,552 DEBUG [StoreOpener-ab1743b12c3c40199d951ab9e788e8a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4/info 2023-07-24 06:11:13,553 INFO [StoreOpener-ab1743b12c3c40199d951ab9e788e8a4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab1743b12c3c40199d951ab9e788e8a4 columnFamilyName info 2023-07-24 06:11:13,554 INFO [StoreOpener-ab1743b12c3c40199d951ab9e788e8a4-1] regionserver.HStore(310): Store=ab1743b12c3c40199d951ab9e788e8a4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:13,554 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,556 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,559 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:13,564 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:13,565 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab1743b12c3c40199d951ab9e788e8a4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9446292960, jitterRate=-0.12024541199207306}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:13,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab1743b12c3c40199d951ab9e788e8a4: 2023-07-24 06:11:13,566 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4., pid=8, masterSystemTime=1690179073525 2023-07-24 06:11:13,571 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 2023-07-24 06:11:13,572 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 
2023-07-24 06:11:13,572 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ab1743b12c3c40199d951ab9e788e8a4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:13,572 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179073572"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179073572"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179073572"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179073572"}]},"ts":"1690179073572"} 2023-07-24 06:11:13,576 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-24 06:11:13,576 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure ab1743b12c3c40199d951ab9e788e8a4, server=jenkins-hbase4.apache.org,42129,1690179072312 in 208 msec 2023-07-24 06:11:13,577 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 06:11:13,577 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ab1743b12c3c40199d951ab9e788e8a4, ASSIGN in 366 msec 2023-07-24 06:11:13,578 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:13,578 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179073578"}]},"ts":"1690179073578"} 2023-07-24 06:11:13,580 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 06:11:13,582 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:13,584 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 415 msec 2023-07-24 06:11:13,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 06:11:13,671 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:13,671 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:13,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:11:13,677 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51514, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-24 06:11:13,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 06:11:13,686 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:13,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ea3018329bd0900b80ece2725e52bcca, NAME => 'hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:13,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 06:11:13,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. service=MultiRowMutationService 2023-07-24 06:11:13,687 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 06:11:13,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:13,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,691 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:13,694 INFO [StoreOpener-ea3018329bd0900b80ece2725e52bcca-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,696 DEBUG [StoreOpener-ea3018329bd0900b80ece2725e52bcca-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca/m 2023-07-24 06:11:13,696 DEBUG [StoreOpener-ea3018329bd0900b80ece2725e52bcca-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca/m 2023-07-24 06:11:13,697 INFO 
[StoreOpener-ea3018329bd0900b80ece2725e52bcca-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ea3018329bd0900b80ece2725e52bcca columnFamilyName m 2023-07-24 06:11:13,697 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec 2023-07-24 06:11:13,698 INFO [StoreOpener-ea3018329bd0900b80ece2725e52bcca-1] regionserver.HStore(310): Store=ea3018329bd0900b80ece2725e52bcca/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:13,699 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,699 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,703 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:13,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 06:11:13,706 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-24 06:11:13,706 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 06:11:13,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:13,709 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ea3018329bd0900b80ece2725e52bcca; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@67649b3c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:13,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ea3018329bd0900b80ece2725e52bcca: 2023-07-24 06:11:13,710 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca., pid=9, masterSystemTime=1690179073681 2023-07-24 06:11:13,712 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:13,712 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:13,713 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=ea3018329bd0900b80ece2725e52bcca, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:13,713 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179073713"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179073713"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179073713"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179073713"}]},"ts":"1690179073713"} 2023-07-24 06:11:13,717 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 06:11:13,717 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure ea3018329bd0900b80ece2725e52bcca, server=jenkins-hbase4.apache.org,42129,1690179072312 in 194 msec 2023-07-24 06:11:13,719 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-24 06:11:13,719 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=ea3018329bd0900b80ece2725e52bcca, ASSIGN in 359 msec 2023-07-24 06:11:13,734 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:13,737 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 33 msec 2023-07-24 06:11:13,738 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:13,738 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179073738"}]},"ts":"1690179073738"} 2023-07-24 06:11:13,740 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 06:11:13,742 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:13,744 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 445 msec 2023-07-24 06:11:13,750 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 
06:11:13,753 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 06:11:13,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.273sec 2023-07-24 06:11:13,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-24 06:11:13,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:13,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-24 06:11:13,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-24 06:11:13,757 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:13,758 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:13,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-24 06:11:13,760 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:13,761 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e empty. 2023-07-24 06:11:13,761 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:13,761 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-24 06:11:13,771 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-24 06:11:13,771 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 
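Here MasterQuotaManager bootstraps quota support and creates the 'hbase:quota' table (families 'q' and 'u'). Once that table is online, quotas are registered through the client Admin API; the sketch below assumes the standard QuotaSettingsFactory helpers, and the user name and limit are arbitrary illustration values, not anything this test sets.

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class QuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Throttle a (made-up) user to 100 requests per second; the setting is persisted in hbase:quota.
      admin.setQuota(
          QuotaSettingsFactory.throttleUser("jenkins", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
    }
  }
}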
2023-07-24 06:11:13,773 DEBUG [Listener at localhost/33861] zookeeper.ReadOnlyZKClient(139): Connect 0x3c08d8b8 to 127.0.0.1:57631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:13,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:13,775 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:13,775 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 06:11:13,775 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 06:11:13,775 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44691,1690179071755-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 06:11:13,775 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44691,1690179071755-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 06:11:13,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 06:11:13,808 DEBUG [Listener at localhost/33861] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a8e6c2f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:13,809 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 06:11:13,810 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 06:11:13,815 DEBUG [hconnection-0x271dc52-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:11:13,816 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:13,816 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:13,817 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:13,819 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 06:11:13,820 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44691,1690179071755] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 06:11:13,821 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38986, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:11:13,823 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0ac594261181b23b6000dc7dbad5aa6e, NAME => 'hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp 2023-07-24 06:11:13,825 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:13,825 INFO [Listener at localhost/33861] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:13,828 DEBUG [Listener at localhost/33861] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 06:11:13,833 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39160, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 06:11:13,838 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 06:11:13,838 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:13,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 06:11:13,840 DEBUG [Listener at localhost/33861] zookeeper.ReadOnlyZKClient(139): Connect 0x3d692739 to 127.0.0.1:57631 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:13,851 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:13,852 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 0ac594261181b23b6000dc7dbad5aa6e, disabling compactions & flushes 2023-07-24 06:11:13,852 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:13,852 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:13,852 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. after waiting 0 ms 2023-07-24 06:11:13,853 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:13,853 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:13,853 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 0ac594261181b23b6000dc7dbad5aa6e: 2023-07-24 06:11:13,857 DEBUG [Listener at localhost/33861] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31f6ff8a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:13,857 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:13,857 INFO [Listener at localhost/33861] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57631 2023-07-24 06:11:13,859 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690179073859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179073859"}]},"ts":"1690179073859"} 2023-07-24 06:11:13,860 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:13,861 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
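The "set balanceSwitch=false" request above is the test client disabling the load balancer so that region placement stays under its control. A minimal sketch of the equivalent call with the HBase 2.x Admin API:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.*;

public class BalancerSwitchSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Turn the balancer off; the second argument asks the master to wait for any
      // in-flight balance run to finish before returning.
      boolean wasEnabled = admin.balancerSwitch(false, true);
      System.out.println("balancer previously enabled: " + wasEnabled);
    }
  }
}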
2023-07-24 06:11:13,862 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:13,862 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179073862"}]},"ts":"1690179073862"} 2023-07-24 06:11:13,863 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-24 06:11:13,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-24 06:11:13,869 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:13,869 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:13,869 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:13,869 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:13,869 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:13,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=0ac594261181b23b6000dc7dbad5aa6e, ASSIGN}] 2023-07-24 06:11:13,872 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10195f471f3000a connected 2023-07-24 06:11:13,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-24 06:11:13,875 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=0ac594261181b23b6000dc7dbad5aa6e, ASSIGN 2023-07-24 06:11:13,876 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=0ac594261181b23b6000dc7dbad5aa6e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42129,1690179072312; forceNewPlan=false, retain=false 2023-07-24 06:11:13,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 06:11:13,893 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:13,900 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 26 msec 2023-07-24 06:11:13,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 06:11:13,993 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:13,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-24 06:11:13,997 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:13,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-24 06:11:13,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 06:11:13,999 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:14,000 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 06:11:14,002 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:14,003 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,004 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48 empty. 2023-07-24 06:11:14,004 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,004 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 06:11:14,027 INFO [jenkins-hbase4:44691] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
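The RPC handler entries above show the test creating namespace 'np1' with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2, then creating 'np1:table1' with a single 'fam1' family. A minimal sketch of the equivalent client calls, assuming the standard 2.x Admin API:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public class Np1Sketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Namespace capped at 5 regions and 2 tables, matching the properties in the log.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build());
      // A single-family table inside that namespace, like np1:table1 above.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());
    }
  }
}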
2023-07-24 06:11:14,029 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0ac594261181b23b6000dc7dbad5aa6e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:14,029 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690179074029"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179074029"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179074029"}]},"ts":"1690179074029"} 2023-07-24 06:11:14,033 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 0ac594261181b23b6000dc7dbad5aa6e, server=jenkins-hbase4.apache.org,42129,1690179072312}] 2023-07-24 06:11:14,034 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:14,040 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 494f8813ecc112cf3b3c314cfa487e48, NAME => 'np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp 2023-07-24 06:11:14,062 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:14,062 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 494f8813ecc112cf3b3c314cfa487e48, disabling compactions & flushes 2023-07-24 06:11:14,062 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,062 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,062 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. after waiting 0 ms 2023-07-24 06:11:14,062 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,062 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 
2023-07-24 06:11:14,062 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 494f8813ecc112cf3b3c314cfa487e48: 2023-07-24 06:11:14,065 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:14,066 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179074066"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179074066"}]},"ts":"1690179074066"} 2023-07-24 06:11:14,067 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:11:14,068 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:14,068 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179074068"}]},"ts":"1690179074068"} 2023-07-24 06:11:14,069 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-24 06:11:14,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:14,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:14,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:14,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:14,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:14,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=494f8813ecc112cf3b3c314cfa487e48, ASSIGN}] 2023-07-24 06:11:14,075 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=494f8813ecc112cf3b3c314cfa487e48, ASSIGN 2023-07-24 06:11:14,076 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=494f8813ecc112cf3b3c314cfa487e48, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35909,1690179072147; forceNewPlan=false, retain=false 2023-07-24 06:11:14,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 06:11:14,196 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 
2023-07-24 06:11:14,196 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ac594261181b23b6000dc7dbad5aa6e, NAME => 'hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:14,197 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:14,197 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:14,197 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:14,197 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:14,198 INFO [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:14,200 DEBUG [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e/q 2023-07-24 06:11:14,200 DEBUG [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e/q 2023-07-24 06:11:14,200 INFO [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ac594261181b23b6000dc7dbad5aa6e columnFamilyName q 2023-07-24 06:11:14,201 INFO [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] regionserver.HStore(310): Store=0ac594261181b23b6000dc7dbad5aa6e/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:14,201 INFO [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:14,204 DEBUG 
[StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e/u 2023-07-24 06:11:14,204 DEBUG [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e/u 2023-07-24 06:11:14,204 INFO [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ac594261181b23b6000dc7dbad5aa6e columnFamilyName u 2023-07-24 06:11:14,205 INFO [StoreOpener-0ac594261181b23b6000dc7dbad5aa6e-1] regionserver.HStore(310): Store=0ac594261181b23b6000dc7dbad5aa6e/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:14,206 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:14,206 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:14,208 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-24 06:11:14,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:14,218 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:14,219 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0ac594261181b23b6000dc7dbad5aa6e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9545142720, jitterRate=-0.11103931069374084}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 06:11:14,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0ac594261181b23b6000dc7dbad5aa6e: 2023-07-24 06:11:14,220 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e., pid=16, masterSystemTime=1690179074191 2023-07-24 06:11:14,221 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:14,221 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:14,222 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0ac594261181b23b6000dc7dbad5aa6e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:14,222 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690179074221"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179074221"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179074221"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179074221"}]},"ts":"1690179074221"} 2023-07-24 06:11:14,224 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-24 06:11:14,225 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 0ac594261181b23b6000dc7dbad5aa6e, server=jenkins-hbase4.apache.org,42129,1690179072312 in 190 msec 2023-07-24 06:11:14,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 06:11:14,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=0ac594261181b23b6000dc7dbad5aa6e, ASSIGN in 355 msec 2023-07-24 06:11:14,226 INFO [jenkins-hbase4:44691] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 06:11:14,228 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=494f8813ecc112cf3b3c314cfa487e48, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:14,228 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179074228"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179074228"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179074228"}]},"ts":"1690179074228"} 2023-07-24 06:11:14,228 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:14,228 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179074228"}]},"ts":"1690179074228"} 2023-07-24 06:11:14,229 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 494f8813ecc112cf3b3c314cfa487e48, server=jenkins-hbase4.apache.org,35909,1690179072147}] 2023-07-24 06:11:14,230 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-24 06:11:14,232 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:14,234 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 480 msec 2023-07-24 06:11:14,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 06:11:14,385 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 
2023-07-24 06:11:14,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 494f8813ecc112cf3b3c314cfa487e48, NAME => 'np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:14,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:14,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,387 INFO [StoreOpener-494f8813ecc112cf3b3c314cfa487e48-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,388 DEBUG [StoreOpener-494f8813ecc112cf3b3c314cfa487e48-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48/fam1 2023-07-24 06:11:14,388 DEBUG [StoreOpener-494f8813ecc112cf3b3c314cfa487e48-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48/fam1 2023-07-24 06:11:14,389 INFO [StoreOpener-494f8813ecc112cf3b3c314cfa487e48-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 494f8813ecc112cf3b3c314cfa487e48 columnFamilyName fam1 2023-07-24 06:11:14,390 INFO [StoreOpener-494f8813ecc112cf3b3c314cfa487e48-1] regionserver.HStore(310): Store=494f8813ecc112cf3b3c314cfa487e48/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:14,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:14,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 494f8813ecc112cf3b3c314cfa487e48; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11370405280, jitterRate=0.05895151197910309}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:14,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 494f8813ecc112cf3b3c314cfa487e48: 2023-07-24 06:11:14,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48., pid=18, masterSystemTime=1690179074381 2023-07-24 06:11:14,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,399 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=494f8813ecc112cf3b3c314cfa487e48, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:14,399 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179074399"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179074399"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179074399"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179074399"}]},"ts":"1690179074399"} 2023-07-24 06:11:14,401 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 06:11:14,402 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 494f8813ecc112cf3b3c314cfa487e48, server=jenkins-hbase4.apache.org,35909,1690179072147 in 171 msec 2023-07-24 06:11:14,403 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-24 06:11:14,403 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=494f8813ecc112cf3b3c314cfa487e48, ASSIGN in 328 msec 2023-07-24 06:11:14,404 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:14,404 DEBUG [PEWorker-4] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179074404"}]},"ts":"1690179074404"} 2023-07-24 06:11:14,405 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-24 06:11:14,407 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:14,408 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 414 msec 2023-07-24 06:11:14,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 06:11:14,601 INFO [Listener at localhost/33861] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-24 06:11:14,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:14,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-24 06:11:14,606 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:14,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-24 06:11:14,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 06:11:14,630 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=22 msec 2023-07-24 06:11:14,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 06:11:14,710 INFO [Listener at localhost/33861] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
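The rollback of pid=19 above is the namespace region quota on 'np1' doing its job: creating np1:table2 would push the namespace past its 5-region cap, so the CreateTableProcedure is rolled back with QuotaExceededException and the client sees the create fail. A minimal sketch of how such a cap is typically set and then tripped follows; the split keys and connection setup are assumptions for illustration, and only the namespace, table and family names and the 5-region limit come from the log.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceRegionQuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Namespace capped at 5 regions in total (the limit reported in the log).
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build());
      // Ask for more regions than the namespace has room for; five split keys are an
      // assumption chosen so the new table alone would need six regions.
      byte[][] splitKeys = {
          Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
          Bytes.toBytes("4"), Bytes.toBytes("5")
      };
      try {
        admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table2"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build(), splitKeys);
      } catch (IOException e) {
        // In the run above this surfaces as org.apache.hadoop.hbase.quotas.QuotaExceededException
        // and the master rolls the procedure back (pid=19, state=ROLLEDBACK).
      }
    }
  }
}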
2023-07-24 06:11:14,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:14,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:14,713 INFO [Listener at localhost/33861] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-24 06:11:14,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-24 06:11:14,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-24 06:11:14,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 06:11:14,717 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179074717"}]},"ts":"1690179074717"} 2023-07-24 06:11:14,718 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-24 06:11:14,720 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-24 06:11:14,720 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=494f8813ecc112cf3b3c314cfa487e48, UNASSIGN}] 2023-07-24 06:11:14,721 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=494f8813ecc112cf3b3c314cfa487e48, UNASSIGN 2023-07-24 06:11:14,722 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=494f8813ecc112cf3b3c314cfa487e48, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:14,722 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179074722"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179074722"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179074722"}]},"ts":"1690179074722"} 2023-07-24 06:11:14,723 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 494f8813ecc112cf3b3c314cfa487e48, server=jenkins-hbase4.apache.org,35909,1690179072147}] 2023-07-24 06:11:14,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 06:11:14,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 494f8813ecc112cf3b3c314cfa487e48, disabling compactions & flushes 2023-07-24 06:11:14,877 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. after waiting 0 ms 2023-07-24 06:11:14,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:14,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48. 2023-07-24 06:11:14,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 494f8813ecc112cf3b3c314cfa487e48: 2023-07-24 06:11:14,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:14,884 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=494f8813ecc112cf3b3c314cfa487e48, regionState=CLOSED 2023-07-24 06:11:14,884 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179074883"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179074883"}]},"ts":"1690179074883"} 2023-07-24 06:11:14,886 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-24 06:11:14,886 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 494f8813ecc112cf3b3c314cfa487e48, server=jenkins-hbase4.apache.org,35909,1690179072147 in 162 msec 2023-07-24 06:11:14,887 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-24 06:11:14,887 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=494f8813ecc112cf3b3c314cfa487e48, UNASSIGN in 166 msec 2023-07-24 06:11:14,888 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179074888"}]},"ts":"1690179074888"} 2023-07-24 06:11:14,889 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-24 06:11:14,892 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-24 06:11:14,893 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 179 msec 2023-07-24 06:11:15,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 06:11:15,019 INFO [Listener at localhost/33861] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-24 06:11:15,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-24 06:11:15,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-24 06:11:15,023 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 06:11:15,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-24 06:11:15,023 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 06:11:15,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:15,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 06:11:15,027 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:15,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 06:11:15,029 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48/fam1, FileablePath, hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48/recovered.edits] 2023-07-24 06:11:15,034 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48/recovered.edits/4.seqid to hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/archive/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48/recovered.edits/4.seqid 2023-07-24 06:11:15,035 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/.tmp/data/np1/table1/494f8813ecc112cf3b3c314cfa487e48 2023-07-24 06:11:15,035 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 06:11:15,038 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 06:11:15,039 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-24 06:11:15,042 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-24 06:11:15,043 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 06:11:15,043 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-24 06:11:15,043 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179075043"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:15,044 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 06:11:15,045 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 494f8813ecc112cf3b3c314cfa487e48, NAME => 'np1:table1,,1690179073993.494f8813ecc112cf3b3c314cfa487e48.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 06:11:15,045 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-24 06:11:15,045 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690179075045"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:15,046 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-24 06:11:15,049 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 06:11:15,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 29 msec 2023-07-24 06:11:15,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 06:11:15,130 INFO [Listener at localhost/33861] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-24 06:11:15,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-24 06:11:15,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-24 06:11:15,146 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 06:11:15,149 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 06:11:15,151 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 06:11:15,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 06:11:15,152 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-24 06:11:15,152 DEBUG [Listener at 
localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:15,153 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 06:11:15,155 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 06:11:15,156 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 18 msec 2023-07-24 06:11:15,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44691] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 06:11:15,253 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 06:11:15,254 INFO [Listener at localhost/33861] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 06:11:15,254 DEBUG [Listener at localhost/33861] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3c08d8b8 to 127.0.0.1:57631 2023-07-24 06:11:15,254 DEBUG [Listener at localhost/33861] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:15,254 DEBUG [Listener at localhost/33861] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 06:11:15,254 DEBUG [Listener at localhost/33861] util.JVMClusterUtil(257): Found active master hash=1371291900, stopped=false 2023-07-24 06:11:15,254 DEBUG [Listener at localhost/33861] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 06:11:15,254 DEBUG [Listener at localhost/33861] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 06:11:15,255 DEBUG [Listener at localhost/33861] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 06:11:15,255 INFO [Listener at localhost/33861] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:15,262 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:15,262 INFO [Listener at localhost/33861] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 06:11:15,262 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:15,262 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:15,263 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:15,262 DEBUG 
[Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:15,264 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:15,264 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:15,265 DEBUG [Listener at localhost/33861] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3f15d7b1 to 127.0.0.1:57631 2023-07-24 06:11:15,264 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:15,265 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:15,265 DEBUG [Listener at localhost/33861] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:15,265 INFO [Listener at localhost/33861] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35937,1690179071953' ***** 2023-07-24 06:11:15,265 INFO [Listener at localhost/33861] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:15,265 INFO [Listener at localhost/33861] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35909,1690179072147' ***** 2023-07-24 06:11:15,266 INFO [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:15,266 INFO [Listener at localhost/33861] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:15,266 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:15,267 INFO [Listener at localhost/33861] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42129,1690179072312' ***** 2023-07-24 06:11:15,272 INFO [Listener at localhost/33861] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:15,274 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:15,281 INFO [RS:0;jenkins-hbase4:35937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@16c56918{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:15,282 INFO [RS:2;jenkins-hbase4:42129] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7fb867a7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:15,282 INFO [RS:1;jenkins-hbase4:35909] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@55941c90{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
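Stepping back from the shutdown entries that begin here: the pid=20, pid=23 and pid=24 procedures logged a little earlier (disable np1:table1, delete np1:table1, delete namespace np1) are the server-side footprint of an ordinary client cleanup. A minimal sketch, assuming an Admin handle obtained as in the earlier sketches:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class Np1CleanupSketch {
  // Pass in an Admin obtained from a Connection, as in the earlier sketches.
  static void cleanup(Admin admin) throws IOException {
    TableName t1 = TableName.valueOf("np1", "table1");
    admin.disableTable(t1);        // DisableTableProcedure, pid=20 above
    admin.deleteTable(t1);         // DeleteTableProcedure, pid=23; regions archived, meta rows removed
    admin.deleteNamespace("np1");  // DeleteNamespaceProcedure, pid=24; the namespace must hold no tables
  }
}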
2023-07-24 06:11:15,282 INFO [RS:2;jenkins-hbase4:42129] server.AbstractConnector(383): Stopped ServerConnector@1934427{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:15,282 INFO [RS:1;jenkins-hbase4:35909] server.AbstractConnector(383): Stopped ServerConnector@6bb9847c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:15,282 INFO [RS:2;jenkins-hbase4:42129] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:15,282 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:15,282 INFO [RS:0;jenkins-hbase4:35937] server.AbstractConnector(383): Stopped ServerConnector@c7ee5df{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:15,284 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:15,284 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:15,283 INFO [RS:2;jenkins-hbase4:42129] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6cd10080{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:15,283 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:15,282 INFO [RS:1;jenkins-hbase4:35909] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:15,287 INFO [RS:2;jenkins-hbase4:42129] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a1ad62a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:15,287 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:15,288 INFO [RS:1;jenkins-hbase4:35909] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2dd59250{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:15,288 INFO [RS:1;jenkins-hbase4:35909] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ab99eeb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:15,287 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:15,284 INFO [RS:0;jenkins-hbase4:35937] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:15,289 INFO [RS:2;jenkins-hbase4:42129] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:15,290 INFO [RS:1;jenkins-hbase4:35909] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:15,290 INFO [RS:0;jenkins-hbase4:35937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7ebbee1c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:15,290 INFO [RS:2;jenkins-hbase4:42129] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-24 06:11:15,290 INFO [RS:0;jenkins-hbase4:35937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38a9d218{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:15,290 INFO [RS:1;jenkins-hbase4:35909] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:15,290 INFO [RS:1;jenkins-hbase4:35909] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:15,290 INFO [RS:2;jenkins-hbase4:42129] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:15,291 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:15,291 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(3305): Received CLOSE for 0ac594261181b23b6000dc7dbad5aa6e 2023-07-24 06:11:15,291 INFO [RS:0;jenkins-hbase4:35937] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:15,291 DEBUG [RS:1;jenkins-hbase4:35909] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x56c18f3a to 127.0.0.1:57631 2023-07-24 06:11:15,291 DEBUG [RS:1;jenkins-hbase4:35909] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:15,291 INFO [RS:1;jenkins-hbase4:35909] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:15,292 INFO [RS:1;jenkins-hbase4:35909] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:15,292 INFO [RS:1;jenkins-hbase4:35909] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 06:11:15,292 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 06:11:15,292 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(3305): Received CLOSE for ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:15,292 INFO [RS:0;jenkins-hbase4:35937] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:15,292 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(3305): Received CLOSE for ab1743b12c3c40199d951ab9e788e8a4 2023-07-24 06:11:15,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0ac594261181b23b6000dc7dbad5aa6e, disabling compactions & flushes 2023-07-24 06:11:15,292 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 06:11:15,292 INFO [RS:0;jenkins-hbase4:35937] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:15,293 INFO [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:15,293 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 06:11:15,293 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-24 06:11:15,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 
2023-07-24 06:11:15,293 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:15,294 DEBUG [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 06:11:15,294 DEBUG [RS:2;jenkins-hbase4:42129] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3a7f31cb to 127.0.0.1:57631 2023-07-24 06:11:15,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:15,293 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 06:11:15,293 DEBUG [RS:0;jenkins-hbase4:35937] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x35d21800 to 127.0.0.1:57631 2023-07-24 06:11:15,294 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 06:11:15,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. after waiting 0 ms 2023-07-24 06:11:15,294 DEBUG [RS:2;jenkins-hbase4:42129] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:15,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:15,294 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 06:11:15,294 DEBUG [RS:0;jenkins-hbase4:35937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:15,295 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 06:11:15,295 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-24 06:11:15,295 INFO [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35937,1690179071953; all regions closed. 2023-07-24 06:11:15,295 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-24 06:11:15,295 DEBUG [RS:0;jenkins-hbase4:35937] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-24 06:11:15,295 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1478): Online Regions={0ac594261181b23b6000dc7dbad5aa6e=hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e., ea3018329bd0900b80ece2725e52bcca=hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca., ab1743b12c3c40199d951ab9e788e8a4=hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4.} 2023-07-24 06:11:15,295 DEBUG [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1504): Waiting on 0ac594261181b23b6000dc7dbad5aa6e, ab1743b12c3c40199d951ab9e788e8a4, ea3018329bd0900b80ece2725e52bcca 2023-07-24 06:11:15,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/quota/0ac594261181b23b6000dc7dbad5aa6e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:15,309 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:15,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0ac594261181b23b6000dc7dbad5aa6e: 2023-07-24 06:11:15,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690179073753.0ac594261181b23b6000dc7dbad5aa6e. 2023-07-24 06:11:15,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ea3018329bd0900b80ece2725e52bcca, disabling compactions & flushes 2023-07-24 06:11:15,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:15,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:15,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. after waiting 0 ms 2023-07-24 06:11:15,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 
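From "Shutting down minicluster" onward, these entries are HBaseTestingUtility tearing the test cluster down: the master requests shutdown, and each region server flushes and closes its online regions (hbase:meta, hbase:quota, hbase:rsgroup, hbase:namespace) before stopping. In test code that whole sequence is typically triggered by a single teardown call; a minimal sketch follows, where the class name, field name TEST_UTIL and JUnit wiring are assumptions rather than taken from this log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class TearDownSketch {
  // TEST_UTIL is an assumed field name for the utility that started the mini-cluster.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    // Stops the HBase mini-cluster (master and region servers) and its backing DFS/ZK,
    // producing shutdown entries like those above.
    TEST_UTIL.shutdownMiniCluster();
  }
}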
2023-07-24 06:11:15,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ea3018329bd0900b80ece2725e52bcca 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-24 06:11:15,312 DEBUG [RS:0;jenkins-hbase4:35937] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/oldWALs 2023-07-24 06:11:15,312 INFO [RS:0;jenkins-hbase4:35937] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35937%2C1690179071953:(num 1690179073023) 2023-07-24 06:11:15,312 DEBUG [RS:0;jenkins-hbase4:35937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:15,313 INFO [RS:0;jenkins-hbase4:35937] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:15,313 INFO [RS:0;jenkins-hbase4:35937] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:15,313 INFO [RS:0;jenkins-hbase4:35937] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:15,313 INFO [RS:0;jenkins-hbase4:35937] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:15,313 INFO [RS:0;jenkins-hbase4:35937] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 06:11:15,313 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:15,314 INFO [RS:0;jenkins-hbase4:35937] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35937 2023-07-24 06:11:15,325 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:15,325 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:15,325 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:15,325 ERROR [Listener at localhost/33861-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@6bd04c88 rejected from java.util.concurrent.ThreadPoolExecutor@6fefd52a[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-24 06:11:15,326 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35937,1690179071953 2023-07-24 06:11:15,326 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:15,326 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:15,325 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:15,327 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35937,1690179071953] 2023-07-24 06:11:15,327 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35937,1690179071953; numProcessing=1 2023-07-24 06:11:15,329 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35937,1690179071953 already deleted, retry=false 2023-07-24 06:11:15,329 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35937,1690179071953 expired; onlineServers=2 2023-07-24 06:11:15,331 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/.tmp/info/0bb501e0880440a089f0048dc78e34e5 2023-07-24 06:11:15,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca/.tmp/m/ad9486f5732447c684fca5c69dc012c7 2023-07-24 06:11:15,341 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0bb501e0880440a089f0048dc78e34e5 2023-07-24 06:11:15,345 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca/.tmp/m/ad9486f5732447c684fca5c69dc012c7 as hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca/m/ad9486f5732447c684fca5c69dc012c7 2023-07-24 06:11:15,354 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), 
to=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/.tmp/rep_barrier/569c4db5f93a4ecd80081eac5a4d8a9f 2023-07-24 06:11:15,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca/m/ad9486f5732447c684fca5c69dc012c7, entries=1, sequenceid=7, filesize=4.9 K 2023-07-24 06:11:15,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for ea3018329bd0900b80ece2725e52bcca in 45ms, sequenceid=7, compaction requested=false 2023-07-24 06:11:15,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 06:11:15,361 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 569c4db5f93a4ecd80081eac5a4d8a9f 2023-07-24 06:11:15,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/rsgroup/ea3018329bd0900b80ece2725e52bcca/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-24 06:11:15,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:11:15,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:15,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ea3018329bd0900b80ece2725e52bcca: 2023-07-24 06:11:15,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690179073297.ea3018329bd0900b80ece2725e52bcca. 2023-07-24 06:11:15,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab1743b12c3c40199d951ab9e788e8a4, disabling compactions & flushes 2023-07-24 06:11:15,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 2023-07-24 06:11:15,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 2023-07-24 06:11:15,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. after waiting 0 ms 2023-07-24 06:11:15,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 
2023-07-24 06:11:15,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ab1743b12c3c40199d951ab9e788e8a4 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-24 06:11:15,375 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/.tmp/table/1dfde53dedaa4beb846ec97da134003f 2023-07-24 06:11:15,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4/.tmp/info/ad87e51523e74173b2cc1d22f258788d 2023-07-24 06:11:15,441 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1dfde53dedaa4beb846ec97da134003f 2023-07-24 06:11:15,443 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/.tmp/info/0bb501e0880440a089f0048dc78e34e5 as hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/info/0bb501e0880440a089f0048dc78e34e5 2023-07-24 06:11:15,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ad87e51523e74173b2cc1d22f258788d 2023-07-24 06:11:15,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4/.tmp/info/ad87e51523e74173b2cc1d22f258788d as hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4/info/ad87e51523e74173b2cc1d22f258788d 2023-07-24 06:11:15,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0bb501e0880440a089f0048dc78e34e5 2023-07-24 06:11:15,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/info/0bb501e0880440a089f0048dc78e34e5, entries=32, sequenceid=31, filesize=8.5 K 2023-07-24 06:11:15,450 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/.tmp/rep_barrier/569c4db5f93a4ecd80081eac5a4d8a9f as hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/rep_barrier/569c4db5f93a4ecd80081eac5a4d8a9f 2023-07-24 06:11:15,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ad87e51523e74173b2cc1d22f258788d 2023-07-24 06:11:15,455 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4/info/ad87e51523e74173b2cc1d22f258788d, entries=3, sequenceid=8, filesize=5.0 K 2023-07-24 06:11:15,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for ab1743b12c3c40199d951ab9e788e8a4 in 91ms, sequenceid=8, compaction requested=false 2023-07-24 06:11:15,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 06:11:15,456 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 569c4db5f93a4ecd80081eac5a4d8a9f 2023-07-24 06:11:15,456 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/rep_barrier/569c4db5f93a4ecd80081eac5a4d8a9f, entries=1, sequenceid=31, filesize=4.9 K 2023-07-24 06:11:15,457 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/.tmp/table/1dfde53dedaa4beb846ec97da134003f as hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/table/1dfde53dedaa4beb846ec97da134003f 2023-07-24 06:11:15,463 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:15,463 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35937-0x10195f471f30001, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:15,463 INFO [RS:0;jenkins-hbase4:35937] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35937,1690179071953; zookeeper connection closed. 2023-07-24 06:11:15,464 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@55e73470] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@55e73470 2023-07-24 06:11:15,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/namespace/ab1743b12c3c40199d951ab9e788e8a4/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-24 06:11:15,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 2023-07-24 06:11:15,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab1743b12c3c40199d951ab9e788e8a4: 2023-07-24 06:11:15,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690179073167.ab1743b12c3c40199d951ab9e788e8a4. 
2023-07-24 06:11:15,467 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1dfde53dedaa4beb846ec97da134003f 2023-07-24 06:11:15,467 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/table/1dfde53dedaa4beb846ec97da134003f, entries=8, sequenceid=31, filesize=5.2 K 2023-07-24 06:11:15,468 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 173ms, sequenceid=31, compaction requested=false 2023-07-24 06:11:15,468 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 06:11:15,484 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-24 06:11:15,484 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:11:15,485 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 06:11:15,485 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 06:11:15,485 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 06:11:15,494 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35909,1690179072147; all regions closed. 2023-07-24 06:11:15,494 DEBUG [RS:1;jenkins-hbase4:35909] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 06:11:15,496 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42129,1690179072312; all regions closed. 2023-07-24 06:11:15,497 DEBUG [RS:2;jenkins-hbase4:42129] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-24 06:11:15,507 DEBUG [RS:1;jenkins-hbase4:35909] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/oldWALs 2023-07-24 06:11:15,507 INFO [RS:1;jenkins-hbase4:35909] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35909%2C1690179072147.meta:.meta(num 1690179073096) 2023-07-24 06:11:15,508 DEBUG [RS:2;jenkins-hbase4:42129] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/oldWALs 2023-07-24 06:11:15,508 INFO [RS:2;jenkins-hbase4:42129] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42129%2C1690179072312:(num 1690179073026) 2023-07-24 06:11:15,508 DEBUG [RS:2;jenkins-hbase4:42129] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:15,508 INFO [RS:2;jenkins-hbase4:42129] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:15,508 INFO [RS:2;jenkins-hbase4:42129] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:15,508 INFO [RS:2;jenkins-hbase4:42129] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:15,508 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:15,508 INFO [RS:2;jenkins-hbase4:42129] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:15,510 INFO [RS:2;jenkins-hbase4:42129] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 06:11:15,511 INFO [RS:2;jenkins-hbase4:42129] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42129 2023-07-24 06:11:15,514 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:15,515 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:15,515 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42129,1690179072312 2023-07-24 06:11:15,516 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42129,1690179072312] 2023-07-24 06:11:15,516 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42129,1690179072312; numProcessing=2 2023-07-24 06:11:15,516 DEBUG [RS:1;jenkins-hbase4:35909] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/oldWALs 2023-07-24 06:11:15,516 INFO [RS:1;jenkins-hbase4:35909] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35909%2C1690179072147:(num 1690179073029) 2023-07-24 06:11:15,516 DEBUG [RS:1;jenkins-hbase4:35909] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 
06:11:15,516 INFO [RS:1;jenkins-hbase4:35909] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:15,517 INFO [RS:1;jenkins-hbase4:35909] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:15,517 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:15,517 INFO [RS:1;jenkins-hbase4:35909] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35909 2023-07-24 06:11:15,616 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:15,616 INFO [RS:2;jenkins-hbase4:42129] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42129,1690179072312; zookeeper connection closed. 2023-07-24 06:11:15,616 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:42129-0x10195f471f30003, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:15,617 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@754c8350] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@754c8350 2023-07-24 06:11:15,618 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42129,1690179072312 already deleted, retry=false 2023-07-24 06:11:15,619 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:15,619 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42129,1690179072312 expired; onlineServers=1 2023-07-24 06:11:15,618 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35909,1690179072147 2023-07-24 06:11:15,621 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35909,1690179072147] 2023-07-24 06:11:15,621 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35909,1690179072147; numProcessing=3 2023-07-24 06:11:15,622 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35909,1690179072147 already deleted, retry=false 2023-07-24 06:11:15,622 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35909,1690179072147 expired; onlineServers=0 2023-07-24 06:11:15,622 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44691,1690179071755' ***** 2023-07-24 06:11:15,622 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 06:11:15,623 DEBUG [M:0;jenkins-hbase4:44691] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5583ce13, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:15,623 INFO [M:0;jenkins-hbase4:44691] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:15,625 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:15,625 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:15,625 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:15,625 INFO [M:0;jenkins-hbase4:44691] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5c80d780{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 06:11:15,626 INFO [M:0;jenkins-hbase4:44691] server.AbstractConnector(383): Stopped ServerConnector@1915edd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:15,626 INFO [M:0;jenkins-hbase4:44691] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:15,626 INFO [M:0;jenkins-hbase4:44691] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@730dd3c1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:15,626 INFO [M:0;jenkins-hbase4:44691] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ea129d6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:15,627 INFO [M:0;jenkins-hbase4:44691] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44691,1690179071755 2023-07-24 06:11:15,627 INFO [M:0;jenkins-hbase4:44691] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44691,1690179071755; all regions closed. 2023-07-24 06:11:15,627 DEBUG [M:0;jenkins-hbase4:44691] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:15,627 INFO [M:0;jenkins-hbase4:44691] master.HMaster(1491): Stopping master jetty server 2023-07-24 06:11:15,627 INFO [M:0;jenkins-hbase4:44691] server.AbstractConnector(383): Stopped ServerConnector@1f62082d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:15,628 DEBUG [M:0;jenkins-hbase4:44691] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 06:11:15,628 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-24 06:11:15,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179072700] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179072700,5,FailOnTimeoutGroup] 2023-07-24 06:11:15,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179072700] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179072700,5,FailOnTimeoutGroup] 2023-07-24 06:11:15,628 DEBUG [M:0;jenkins-hbase4:44691] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 06:11:15,630 INFO [M:0;jenkins-hbase4:44691] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 06:11:15,630 INFO [M:0;jenkins-hbase4:44691] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 06:11:15,630 INFO [M:0;jenkins-hbase4:44691] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:15,630 DEBUG [M:0;jenkins-hbase4:44691] master.HMaster(1512): Stopping service threads 2023-07-24 06:11:15,630 INFO [M:0;jenkins-hbase4:44691] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 06:11:15,631 ERROR [M:0;jenkins-hbase4:44691] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 06:11:15,631 INFO [M:0;jenkins-hbase4:44691] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 06:11:15,631 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 06:11:15,631 DEBUG [M:0;jenkins-hbase4:44691] zookeeper.ZKUtil(398): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 06:11:15,631 WARN [M:0;jenkins-hbase4:44691] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 06:11:15,631 INFO [M:0;jenkins-hbase4:44691] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 06:11:15,632 INFO [M:0;jenkins-hbase4:44691] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 06:11:15,632 DEBUG [M:0;jenkins-hbase4:44691] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 06:11:15,632 INFO [M:0;jenkins-hbase4:44691] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:15,633 DEBUG [M:0;jenkins-hbase4:44691] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:15,633 DEBUG [M:0;jenkins-hbase4:44691] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 06:11:15,633 DEBUG [M:0;jenkins-hbase4:44691] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 06:11:15,633 INFO [M:0;jenkins-hbase4:44691] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.10 KB 2023-07-24 06:11:15,650 INFO [M:0;jenkins-hbase4:44691] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/957b7a1fc2944235a64ae876692f1993 2023-07-24 06:11:15,656 DEBUG [M:0;jenkins-hbase4:44691] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/957b7a1fc2944235a64ae876692f1993 as hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/957b7a1fc2944235a64ae876692f1993 2023-07-24 06:11:15,661 INFO [M:0;jenkins-hbase4:44691] regionserver.HStore(1080): Added hdfs://localhost:43327/user/jenkins/test-data/c9728575-ff5d-d6a4-76d0-ffb706990c58/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/957b7a1fc2944235a64ae876692f1993, entries=24, sequenceid=194, filesize=12.4 K 2023-07-24 06:11:15,662 INFO [M:0;jenkins-hbase4:44691] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95179, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=194, compaction requested=false 2023-07-24 06:11:15,664 INFO [M:0;jenkins-hbase4:44691] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:15,664 DEBUG [M:0;jenkins-hbase4:44691] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 06:11:15,669 INFO [M:0;jenkins-hbase4:44691] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 06:11:15,669 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:15,670 INFO [M:0;jenkins-hbase4:44691] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44691 2023-07-24 06:11:15,672 DEBUG [M:0;jenkins-hbase4:44691] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44691,1690179071755 already deleted, retry=false 2023-07-24 06:11:15,967 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:15,967 INFO [M:0;jenkins-hbase4:44691] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44691,1690179071755; zookeeper connection closed. 2023-07-24 06:11:15,967 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): master:44691-0x10195f471f30000, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:16,067 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:16,067 INFO [RS:1;jenkins-hbase4:35909] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35909,1690179072147; zookeeper connection closed. 
2023-07-24 06:11:16,067 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): regionserver:35909-0x10195f471f30002, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:16,067 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@77c70a6d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@77c70a6d 2023-07-24 06:11:16,068 INFO [Listener at localhost/33861] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-24 06:11:16,068 WARN [Listener at localhost/33861] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 06:11:16,073 INFO [Listener at localhost/33861] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 06:11:16,178 WARN [BP-369382319-172.31.14.131-1690179070859 heartbeating to localhost/127.0.0.1:43327] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 06:11:16,179 WARN [BP-369382319-172.31.14.131-1690179070859 heartbeating to localhost/127.0.0.1:43327] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-369382319-172.31.14.131-1690179070859 (Datanode Uuid f67a515a-385d-4ef8-a221-e9774b2814a8) service to localhost/127.0.0.1:43327 2023-07-24 06:11:16,179 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/dfs/data/data5/current/BP-369382319-172.31.14.131-1690179070859] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:16,180 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/dfs/data/data6/current/BP-369382319-172.31.14.131-1690179070859] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:16,181 WARN [Listener at localhost/33861] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 06:11:16,187 INFO [Listener at localhost/33861] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 06:11:16,290 WARN [BP-369382319-172.31.14.131-1690179070859 heartbeating to localhost/127.0.0.1:43327] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 06:11:16,290 WARN [BP-369382319-172.31.14.131-1690179070859 heartbeating to localhost/127.0.0.1:43327] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-369382319-172.31.14.131-1690179070859 (Datanode Uuid de68e7e5-08b7-4aa4-b94e-7f1182383163) service to localhost/127.0.0.1:43327 2023-07-24 06:11:16,291 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/dfs/data/data3/current/BP-369382319-172.31.14.131-1690179070859] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:16,291 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/dfs/data/data4/current/BP-369382319-172.31.14.131-1690179070859] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:16,293 WARN [Listener at localhost/33861] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 06:11:16,296 INFO [Listener at localhost/33861] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 06:11:16,399 WARN [BP-369382319-172.31.14.131-1690179070859 heartbeating to localhost/127.0.0.1:43327] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 06:11:16,400 WARN [BP-369382319-172.31.14.131-1690179070859 heartbeating to localhost/127.0.0.1:43327] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-369382319-172.31.14.131-1690179070859 (Datanode Uuid d4729379-b093-4dc7-a6ba-b786b2567f05) service to localhost/127.0.0.1:43327 2023-07-24 06:11:16,400 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/dfs/data/data1/current/BP-369382319-172.31.14.131-1690179070859] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:16,401 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/cluster_712d6e1a-a24c-0a20-8daf-bcdae54cc91f/dfs/data/data2/current/BP-369382319-172.31.14.131-1690179070859] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 06:11:16,411 INFO [Listener at localhost/33861] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 06:11:16,526 INFO [Listener at localhost/33861] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.log.dir so I do NOT create it in target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af5415df-a673-de2d-60f9-c8cd1a41a4e1/hadoop.tmp.dir so I do NOT create it in target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61, deleteOnExit=true 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/test.cache.data in system properties and HBase conf 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir in system properties and HBase conf 2023-07-24 06:11:16,561 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 06:11:16,562 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 06:11:16,562 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 06:11:16,562 DEBUG [Listener at localhost/33861] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-24 06:11:16,562 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 06:11:16,562 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 06:11:16,562 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 06:11:16,562 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 06:11:16,562 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/nfs.dump.dir in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 06:11:16,563 INFO [Listener at localhost/33861] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 06:11:16,567 WARN [Listener at localhost/33861] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 06:11:16,567 WARN [Listener at localhost/33861] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 06:11:16,608 WARN [Listener at localhost/33861] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:11:16,610 INFO [Listener at localhost/33861] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:11:16,614 INFO [Listener at localhost/33861] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/Jetty_localhost_39371_hdfs____.25954f/webapp 2023-07-24 06:11:16,623 DEBUG [Listener at localhost/33861-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10195f471f3000a, quorum=127.0.0.1:57631, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 06:11:16,624 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10195f471f3000a, quorum=127.0.0.1:57631, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 06:11:16,707 INFO [Listener at localhost/33861] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39371 2023-07-24 06:11:16,712 WARN [Listener at localhost/33861] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 06:11:16,712 WARN [Listener at localhost/33861] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 06:11:16,754 WARN [Listener at localhost/33169] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:11:16,774 WARN [Listener at localhost/33169] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:11:16,777 WARN [Listener 
at localhost/33169] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:11:16,778 INFO [Listener at localhost/33169] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:11:16,784 INFO [Listener at localhost/33169] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/Jetty_localhost_36141_datanode____.28gb5a/webapp 2023-07-24 06:11:16,884 INFO [Listener at localhost/33169] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36141 2023-07-24 06:11:16,892 WARN [Listener at localhost/37937] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:11:16,910 WARN [Listener at localhost/37937] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:11:16,912 WARN [Listener at localhost/37937] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:11:16,913 INFO [Listener at localhost/37937] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:11:16,916 INFO [Listener at localhost/37937] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/Jetty_localhost_42599_datanode____1zrkfy/webapp 2023-07-24 06:11:17,007 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd3ba52dfdba8c02d: Processing first storage report for DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd from datanode 2f764714-bd09-4536-9863-ef4b0bd9b729 2023-07-24 06:11:17,007 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd3ba52dfdba8c02d: from storage DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd node DatanodeRegistration(127.0.0.1:37787, datanodeUuid=2f764714-bd09-4536-9863-ef4b0bd9b729, infoPort=33459, infoSecurePort=0, ipcPort=37937, storageInfo=lv=-57;cid=testClusterID;nsid=154065488;c=1690179076570), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:17,007 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd3ba52dfdba8c02d: Processing first storage report for DS-46adc20b-55e1-4ac9-ae78-b38b7d7ce5c1 from datanode 2f764714-bd09-4536-9863-ef4b0bd9b729 2023-07-24 06:11:17,007 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd3ba52dfdba8c02d: from storage DS-46adc20b-55e1-4ac9-ae78-b38b7d7ce5c1 node DatanodeRegistration(127.0.0.1:37787, datanodeUuid=2f764714-bd09-4536-9863-ef4b0bd9b729, infoPort=33459, infoSecurePort=0, ipcPort=37937, storageInfo=lv=-57;cid=testClusterID;nsid=154065488;c=1690179076570), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:17,032 INFO [Listener at localhost/37937] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42599 2023-07-24 06:11:17,039 WARN [Listener at localhost/34391] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-24 06:11:17,060 WARN [Listener at localhost/34391] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 06:11:17,062 WARN [Listener at localhost/34391] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 06:11:17,063 INFO [Listener at localhost/34391] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 06:11:17,066 INFO [Listener at localhost/34391] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/Jetty_localhost_35323_datanode____.7xv8xz/webapp 2023-07-24 06:11:17,145 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x984ecce415320830: Processing first storage report for DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb from datanode 9775e98f-62f4-4e02-b1eb-2dc6187768d1 2023-07-24 06:11:17,145 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x984ecce415320830: from storage DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb node DatanodeRegistration(127.0.0.1:45991, datanodeUuid=9775e98f-62f4-4e02-b1eb-2dc6187768d1, infoPort=38169, infoSecurePort=0, ipcPort=34391, storageInfo=lv=-57;cid=testClusterID;nsid=154065488;c=1690179076570), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:17,145 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x984ecce415320830: Processing first storage report for DS-36b15c31-37b9-4170-af61-d39d2f184240 from datanode 9775e98f-62f4-4e02-b1eb-2dc6187768d1 2023-07-24 06:11:17,145 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x984ecce415320830: from storage DS-36b15c31-37b9-4170-af61-d39d2f184240 node DatanodeRegistration(127.0.0.1:45991, datanodeUuid=9775e98f-62f4-4e02-b1eb-2dc6187768d1, infoPort=38169, infoSecurePort=0, ipcPort=34391, storageInfo=lv=-57;cid=testClusterID;nsid=154065488;c=1690179076570), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:17,172 INFO [Listener at localhost/34391] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35323 2023-07-24 06:11:17,179 WARN [Listener at localhost/36479] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 06:11:17,273 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x413040857e55c544: Processing first storage report for DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66 from datanode 62793cb3-c6e8-4930-a854-e48c5487fc04 2023-07-24 06:11:17,274 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x413040857e55c544: from storage DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66 node DatanodeRegistration(127.0.0.1:38311, datanodeUuid=62793cb3-c6e8-4930-a854-e48c5487fc04, infoPort=33643, infoSecurePort=0, ipcPort=36479, storageInfo=lv=-57;cid=testClusterID;nsid=154065488;c=1690179076570), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:17,274 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x413040857e55c544: Processing first storage 
report for DS-c99b9805-b12b-4619-b80d-ce91d4ddaef8 from datanode 62793cb3-c6e8-4930-a854-e48c5487fc04 2023-07-24 06:11:17,274 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x413040857e55c544: from storage DS-c99b9805-b12b-4619-b80d-ce91d4ddaef8 node DatanodeRegistration(127.0.0.1:38311, datanodeUuid=62793cb3-c6e8-4930-a854-e48c5487fc04, infoPort=33643, infoSecurePort=0, ipcPort=36479, storageInfo=lv=-57;cid=testClusterID;nsid=154065488;c=1690179076570), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 06:11:17,285 DEBUG [Listener at localhost/36479] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb 2023-07-24 06:11:17,287 INFO [Listener at localhost/36479] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/zookeeper_0, clientPort=57158, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 06:11:17,289 INFO [Listener at localhost/36479] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57158 2023-07-24 06:11:17,289 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,290 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,308 INFO [Listener at localhost/36479] util.FSUtils(471): Created version file at hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e with version=8 2023-07-24 06:11:17,309 INFO [Listener at localhost/36479] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:41501/user/jenkins/test-data/a103f42c-322b-f107-0150-de32d215fc50/hbase-staging 2023-07-24 06:11:17,309 DEBUG [Listener at localhost/36479] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 06:11:17,310 DEBUG [Listener at localhost/36479] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 06:11:17,310 DEBUG [Listener at localhost/36479] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 06:11:17,310 DEBUG [Listener at localhost/36479] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-24 06:11:17,310 INFO [Listener at localhost/36479] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:17,311 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,311 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,311 INFO [Listener at localhost/36479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:17,311 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,311 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:17,311 INFO [Listener at localhost/36479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:17,313 INFO [Listener at localhost/36479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43839 2023-07-24 06:11:17,314 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,315 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,315 INFO [Listener at localhost/36479] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43839 connecting to ZooKeeper ensemble=127.0.0.1:57158 2023-07-24 06:11:17,323 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:438390x0, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:17,324 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43839-0x10195f487a90000 connected 2023-07-24 06:11:17,342 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:17,343 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:17,343 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:17,345 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43839 2023-07-24 06:11:17,345 DEBUG [Listener at localhost/36479] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43839 2023-07-24 06:11:17,346 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43839 2023-07-24 06:11:17,348 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43839 2023-07-24 06:11:17,349 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43839 2023-07-24 06:11:17,351 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:17,351 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:17,351 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:17,352 INFO [Listener at localhost/36479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 06:11:17,352 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:17,352 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:17,352 INFO [Listener at localhost/36479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
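
The RPC executor entries above record the call-queue sizing the master process ends up with: handlerCount=3, maxQueueLength=30, and a priority queue split into read and write halves by RWQueueRpcExecutor. A minimal sketch, assuming the standard HBase configuration keys rather than this test's actual setup code (the values are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcQueueConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Handler threads per RPC server; the executors above report handlerCount=3.
    conf.setInt("hbase.regionserver.handler.count", 3);
    // Max queued calls typically defaults to 10x the handler count,
    // which matches maxQueueLength=30 in the log above.
    conf.setInt("hbase.ipc.server.max.callqueue.length", 30);
    // A related knob: the read/write split of call queues is tuned with this ratio
    // (illustrative value, not necessarily what produced the split logged above).
    conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
    System.out.println("handlers=" + conf.getInt("hbase.regionserver.handler.count", -1));
  }
}
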
2023-07-24 06:11:17,352 INFO [Listener at localhost/36479] http.HttpServer(1146): Jetty bound to port 44749 2023-07-24 06:11:17,352 INFO [Listener at localhost/36479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:17,357 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,357 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41b87c61{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:17,357 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,358 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64f11582{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:17,471 INFO [Listener at localhost/36479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:17,472 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:17,473 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:17,473 INFO [Listener at localhost/36479] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 06:11:17,474 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,474 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7332fc65{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/jetty-0_0_0_0-44749-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7648234823866562941/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 06:11:17,476 INFO [Listener at localhost/36479] server.AbstractConnector(333): Started ServerConnector@b94c84d{HTTP/1.1, (http/1.1)}{0.0.0.0:44749} 2023-07-24 06:11:17,476 INFO [Listener at localhost/36479] server.Server(415): Started @43077ms 2023-07-24 06:11:17,476 INFO [Listener at localhost/36479] master.HMaster(444): hbase.rootdir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e, hbase.cluster.distributed=false 2023-07-24 06:11:17,492 INFO [Listener at localhost/36479] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:17,492 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,492 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,492 INFO 
[Listener at localhost/36479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:17,492 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,492 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:17,492 INFO [Listener at localhost/36479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:17,494 INFO [Listener at localhost/36479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35855 2023-07-24 06:11:17,495 INFO [Listener at localhost/36479] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:11:17,496 DEBUG [Listener at localhost/36479] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:11:17,496 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,497 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,498 INFO [Listener at localhost/36479] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35855 connecting to ZooKeeper ensemble=127.0.0.1:57158 2023-07-24 06:11:17,502 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:358550x0, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:17,504 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35855-0x10195f487a90001 connected 2023-07-24 06:11:17,504 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:17,504 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:17,505 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:17,509 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35855 2023-07-24 06:11:17,509 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35855 2023-07-24 06:11:17,510 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35855 2023-07-24 06:11:17,514 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35855 2023-07-24 06:11:17,515 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35855 2023-07-24 06:11:17,516 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:17,516 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:17,516 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:17,517 INFO [Listener at localhost/36479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:11:17,517 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:17,517 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:17,517 INFO [Listener at localhost/36479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 06:11:17,518 INFO [Listener at localhost/36479] http.HttpServer(1146): Jetty bound to port 42451 2023-07-24 06:11:17,518 INFO [Listener at localhost/36479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:17,519 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,519 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@304cae86{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:17,520 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,520 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@45e52e99{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:17,636 INFO [Listener at localhost/36479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:17,637 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:17,637 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:17,638 INFO [Listener at localhost/36479] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 06:11:17,639 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,639 INFO 
[Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@62cd2c99{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/jetty-0_0_0_0-42451-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1406077250015883087/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:17,641 INFO [Listener at localhost/36479] server.AbstractConnector(333): Started ServerConnector@54f6b69{HTTP/1.1, (http/1.1)}{0.0.0.0:42451} 2023-07-24 06:11:17,641 INFO [Listener at localhost/36479] server.Server(415): Started @43242ms 2023-07-24 06:11:17,653 INFO [Listener at localhost/36479] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:17,653 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,653 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,653 INFO [Listener at localhost/36479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:17,653 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,653 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:17,653 INFO [Listener at localhost/36479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:17,655 INFO [Listener at localhost/36479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37149 2023-07-24 06:11:17,656 INFO [Listener at localhost/36479] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:11:17,656 DEBUG [Listener at localhost/36479] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:11:17,657 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,658 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,659 INFO [Listener at localhost/36479] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37149 connecting to ZooKeeper ensemble=127.0.0.1:57158 2023-07-24 06:11:17,662 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:371490x0, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 
06:11:17,663 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37149-0x10195f487a90002 connected 2023-07-24 06:11:17,663 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:17,664 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:17,664 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:17,665 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37149 2023-07-24 06:11:17,665 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37149 2023-07-24 06:11:17,665 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37149 2023-07-24 06:11:17,665 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37149 2023-07-24 06:11:17,665 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37149 2023-07-24 06:11:17,667 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:17,667 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:17,667 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:17,668 INFO [Listener at localhost/36479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:11:17,668 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:17,668 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:17,668 INFO [Listener at localhost/36479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
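
With the master's web UI up and the region servers starting through the same RPC/ZooKeeper sequence, a test usually needs handles on the processes being launched here. A small, hedged sketch of how those handles are typically obtained; the helper name printRegionServers is made up, while the accessor methods are the public HBaseTestingUtility / MiniHBaseCluster API:

import java.util.List;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.regionserver.HRegionServer;
import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

public class ClusterInspectionSketch {
  // Prints the server names of the region servers whose startup is traced above.
  static void printRegionServers(HBaseTestingUtility util) {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    List<RegionServerThread> threads = cluster.getRegionServerThreads();
    for (RegionServerThread t : threads) {
      HRegionServer rs = t.getRegionServer();
      System.out.println("region server: " + rs.getServerName());
    }
    System.out.println("active master: " + cluster.getMaster().getServerName());
  }
}
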
2023-07-24 06:11:17,668 INFO [Listener at localhost/36479] http.HttpServer(1146): Jetty bound to port 33833 2023-07-24 06:11:17,669 INFO [Listener at localhost/36479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:17,672 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,672 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@533b3132{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:17,672 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,672 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1dc4335c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:17,784 INFO [Listener at localhost/36479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:17,785 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:17,785 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:17,785 INFO [Listener at localhost/36479] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 06:11:17,786 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,787 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2e992920{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/jetty-0_0_0_0-33833-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1394708297317449387/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:17,789 INFO [Listener at localhost/36479] server.AbstractConnector(333): Started ServerConnector@7e6ae85e{HTTP/1.1, (http/1.1)}{0.0.0.0:33833} 2023-07-24 06:11:17,789 INFO [Listener at localhost/36479] server.Server(415): Started @43390ms 2023-07-24 06:11:17,802 INFO [Listener at localhost/36479] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:17,803 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,803 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,803 INFO [Listener at localhost/36479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:17,803 INFO 
[Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:17,803 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:17,803 INFO [Listener at localhost/36479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:17,805 INFO [Listener at localhost/36479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33281 2023-07-24 06:11:17,805 INFO [Listener at localhost/36479] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:11:17,806 DEBUG [Listener at localhost/36479] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:11:17,807 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,808 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,808 INFO [Listener at localhost/36479] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33281 connecting to ZooKeeper ensemble=127.0.0.1:57158 2023-07-24 06:11:17,812 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:332810x0, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:17,813 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:332810x0, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:17,813 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33281-0x10195f487a90003 connected 2023-07-24 06:11:17,814 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:17,814 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:17,814 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33281 2023-07-24 06:11:17,815 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33281 2023-07-24 06:11:17,815 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33281 2023-07-24 06:11:17,818 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33281 2023-07-24 06:11:17,818 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=33281 2023-07-24 06:11:17,820 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:17,820 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:17,820 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:17,821 INFO [Listener at localhost/36479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:11:17,821 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:17,821 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:17,821 INFO [Listener at localhost/36479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 06:11:17,822 INFO [Listener at localhost/36479] http.HttpServer(1146): Jetty bound to port 33291 2023-07-24 06:11:17,822 INFO [Listener at localhost/36479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:17,825 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,825 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@24c7b503{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:17,826 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,826 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@72fbd169{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:17,940 INFO [Listener at localhost/36479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:17,940 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:17,940 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:17,941 INFO [Listener at localhost/36479] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 06:11:17,941 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:17,942 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6c97b680{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/jetty-0_0_0_0-33291-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6668678352100966717/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:17,943 INFO [Listener at localhost/36479] server.AbstractConnector(333): Started ServerConnector@19acff39{HTTP/1.1, (http/1.1)}{0.0.0.0:33291} 2023-07-24 06:11:17,944 INFO [Listener at localhost/36479] server.Server(415): Started @43545ms 2023-07-24 06:11:17,946 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:17,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@6a0b3282{HTTP/1.1, (http/1.1)}{0.0.0.0:42313} 2023-07-24 06:11:17,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43551ms 2023-07-24 06:11:17,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:17,951 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 06:11:17,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:17,953 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:17,953 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:17,954 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:17,953 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:17,954 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:17,955 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 06:11:17,957 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43839,1690179077310 from backup master directory 2023-07-24 06:11:17,957 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 06:11:17,958 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:17,958 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 06:11:17,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:17,958 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 06:11:17,975 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/hbase.id with ID: 6894efff-4eac-4326-b1de-20b1b26bc674 2023-07-24 06:11:17,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:17,991 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:18,003 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x392f15ad to 127.0.0.1:57158 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:18,007 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d858e98, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:18,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:18,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 06:11:18,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:18,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store-tmp 2023-07-24 06:11:18,019 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:18,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 06:11:18,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:18,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:18,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 06:11:18,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:18,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
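
The descriptor printed above for the master's local 'master:store' region lists the 'proc' family attributes in their string form (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', and so on). Purely as an illustration of how such attributes map onto the public descriptor builder API, and not the master's actual bootstrap code, a sketch using a made-up table name 'demo:store':

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
  public static void main(String[] args) {
    // Mirrors the attributes logged for the 'proc' family: one version, ROW bloom
    // filter, 64 KB blocks, block cache enabled. "demo:store" is a hypothetical table.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)
        .setBloomFilterType(BloomType.ROW)
        .setBlocksize(64 * 1024)
        .setBlockCacheEnabled(true)
        .build();
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo", "store"))
        .setColumnFamily(proc)
        .build();
    System.out.println(td);
  }
}
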
2023-07-24 06:11:18,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 06:11:18,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/WALs/jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:18,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43839%2C1690179077310, suffix=, logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/WALs/jenkins-hbase4.apache.org,43839,1690179077310, archiveDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/oldWALs, maxLogs=10 2023-07-24 06:11:18,039 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK] 2023-07-24 06:11:18,040 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK] 2023-07-24 06:11:18,041 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK] 2023-07-24 06:11:18,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/WALs/jenkins-hbase4.apache.org,43839,1690179077310/jenkins-hbase4.apache.org%2C43839%2C1690179077310.1690179078023 2023-07-24 06:11:18,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK], DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK], DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK]] 2023-07-24 06:11:18,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:18,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:18,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:18,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:18,046 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:18,051 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 06:11:18,051 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 06:11:18,052 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:18,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:18,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:18,056 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 06:11:18,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:18,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9934356160, jitterRate=-0.07479098439216614}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:18,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 06:11:18,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 06:11:18,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 06:11:18,064 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 06:11:18,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 06:11:18,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 06:11:18,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 06:11:18,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 06:11:18,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 06:11:18,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 06:11:18,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 06:11:18,068 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 06:11:18,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 06:11:18,078 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:18,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 06:11:18,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 06:11:18,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 06:11:18,081 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:18,081 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:18,081 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 06:11:18,081 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:18,082 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:18,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43839,1690179077310, sessionid=0x10195f487a90000, setting cluster-up flag (Was=false) 2023-07-24 06:11:18,087 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:18,092 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 06:11:18,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:18,097 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:18,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 06:11:18,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:18,105 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.hbase-snapshot/.tmp 2023-07-24 06:11:18,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 06:11:18,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 06:11:18,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 06:11:18,108 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 06:11:18,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
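
The coprocessor entries above show RSGroupAdminEndpoint being loaded on the master and its RSGroupAdminService registered, which is the service the rsgroup admin tests exercise. A hedged sketch of client-side use of that service, with an illustrative group name and not this test's own code:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Talks to the RSGroupAdminService endpoint registered on the master above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("demo_group");                       // "demo_group" is illustrative
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("demo_group");
      System.out.println("group " + info.getName() + " servers=" + info.getServers());
      rsGroupAdmin.removeRSGroup("demo_group");                    // empty group, so removal is allowed
    }
  }
}
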
2023-07-24 06:11:18,109 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 06:11:18,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 06:11:18,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 06:11:18,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 06:11:18,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 06:11:18,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:11:18,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:11:18,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:11:18,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 06:11:18,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 06:11:18,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:18,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690179108131 2023-07-24 06:11:18,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 06:11:18,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 06:11:18,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 06:11:18,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 06:11:18,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 06:11:18,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 06:11:18,132 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 06:11:18,132 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 06:11:18,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,133 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 06:11:18,133 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 06:11:18,133 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 06:11:18,134 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 06:11:18,134 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 06:11:18,134 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:18,134 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179078134,5,FailOnTimeoutGroup] 2023-07-24 06:11:18,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179078134,5,FailOnTimeoutGroup] 2023-07-24 06:11:18,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 06:11:18,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,146 INFO [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(951): ClusterId : 6894efff-4eac-4326-b1de-20b1b26bc674 2023-07-24 06:11:18,152 DEBUG [RS:0;jenkins-hbase4:35855] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:11:18,155 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(951): ClusterId : 6894efff-4eac-4326-b1de-20b1b26bc674 2023-07-24 06:11:18,156 DEBUG [RS:0;jenkins-hbase4:35855] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:11:18,156 DEBUG [RS:0;jenkins-hbase4:35855] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:11:18,163 DEBUG [RS:1;jenkins-hbase4:37149] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:11:18,164 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(951): ClusterId : 6894efff-4eac-4326-b1de-20b1b26bc674 2023-07-24 06:11:18,166 DEBUG [RS:2;jenkins-hbase4:33281] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:11:18,167 DEBUG [RS:1;jenkins-hbase4:37149] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:11:18,167 DEBUG [RS:1;jenkins-hbase4:37149] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:11:18,169 DEBUG [RS:0;jenkins-hbase4:35855] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:11:18,171 DEBUG [RS:0;jenkins-hbase4:35855] zookeeper.ReadOnlyZKClient(139): Connect 0x47510a59 to 127.0.0.1:57158 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:18,171 DEBUG [RS:1;jenkins-hbase4:37149] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:11:18,173 DEBUG [RS:2;jenkins-hbase4:33281] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:11:18,173 DEBUG [RS:2;jenkins-hbase4:33281] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 
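The "Loaded config" lines above spell out the StochasticLoadBalancer tuning in effect for this run: maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, slop=0.001, isByTable=false. A minimal sketch of how a test could pin the same values follows; the property names are the ones commonly associated with the 2.x balancer and are an assumption here, not something taken from this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
      public static void main(String[] args) {
        // Start from the standard HBase configuration (hbase-site.xml on the classpath).
        Configuration conf = HBaseConfiguration.create();

        // Values mirroring the "Loaded config" lines above (key names are assumed).
        conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
        conf.setFloat("hbase.regions.slop", 0.001f);                 // "slop=0.001"
        conf.setBoolean("hbase.master.loadbalance.bytable", false);  // "isByTable=false"

        System.out.println("maxSteps = "
            + conf.getLong("hbase.master.balancer.stochastic.maxSteps", -1));
      }
    }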
2023-07-24 06:11:18,173 DEBUG [RS:1;jenkins-hbase4:37149] zookeeper.ReadOnlyZKClient(139): Connect 0x0be04482 to 127.0.0.1:57158 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:18,180 DEBUG [RS:2;jenkins-hbase4:33281] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:11:18,181 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:18,182 DEBUG [RS:2;jenkins-hbase4:33281] zookeeper.ReadOnlyZKClient(139): Connect 0x33695ab7 to 127.0.0.1:57158 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:18,182 DEBUG [RS:0;jenkins-hbase4:35855] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4996fcc3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:18,182 DEBUG [RS:0;jenkins-hbase4:35855] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4da96129, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:18,182 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:18,182 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e 2023-07-24 06:11:18,183 DEBUG [RS:1;jenkins-hbase4:37149] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1aa9e54c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:18,183 DEBUG [RS:1;jenkins-hbase4:37149] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3aa1c826, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:18,198 DEBUG [RS:2;jenkins-hbase4:33281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3daea2aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:18,198 DEBUG [RS:2;jenkins-hbase4:33281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@399131c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:18,200 DEBUG [RS:0;jenkins-hbase4:35855] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35855 2023-07-24 06:11:18,200 INFO [RS:0;jenkins-hbase4:35855] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:11:18,200 INFO [RS:0;jenkins-hbase4:35855] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:11:18,200 DEBUG [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:11:18,200 INFO [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43839,1690179077310 with isa=jenkins-hbase4.apache.org/172.31.14.131:35855, startcode=1690179077491 2023-07-24 06:11:18,201 DEBUG [RS:0;jenkins-hbase4:35855] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:11:18,204 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37149 2023-07-24 06:11:18,204 INFO [RS:1;jenkins-hbase4:37149] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:11:18,204 INFO [RS:1;jenkins-hbase4:37149] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:11:18,204 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1022): About to register with Master. 
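The hbase:meta descriptor dumped above (families info, rep_barrier and table, all IN_MEMORY, BLOOMFILTER NONE, TTL FOREVER, block sizes 8192/65536/8192, VERSIONS 3/2147483647/3) is built internally by InitMetaProcedure. Purely as an illustration of the same column-family settings expressed through the 2.x client API, a hypothetical user table could be described as below; the table name and class are invented for the example, and the coprocessor registration (MultiRowMutationEndpoint) that the real meta descriptor carries is omitted.

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
      // One column family with the settings shown in the descriptor dump above.
      static ColumnFamilyDescriptorBuilder family(String name, int versions, int blocksize) {
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes(name))
            .setInMemory(true)
            .setMaxVersions(versions)
            .setBlocksize(blocksize)
            .setBloomFilterType(BloomType.NONE)
            .setKeepDeletedCells(KeepDeletedCells.FALSE)
            .setTimeToLive(HConstants.FOREVER);
      }

      public static void main(String[] args) {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo", "meta_like"))  // hypothetical table name
            .setColumnFamily(family("info", 3, 8192).build())
            .setColumnFamily(family("rep_barrier", Integer.MAX_VALUE, 65536).build())
            .setColumnFamily(family("table", 3, 8192).build())
            .build();
        System.out.println(td);
      }
    }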
2023-07-24 06:11:18,205 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43839,1690179077310 with isa=jenkins-hbase4.apache.org/172.31.14.131:37149, startcode=1690179077652 2023-07-24 06:11:18,205 DEBUG [RS:1;jenkins-hbase4:37149] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:11:18,209 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46211, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:11:18,212 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43839] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:18,212 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48347, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:11:18,212 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 06:11:18,212 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:33281 2023-07-24 06:11:18,213 DEBUG [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e 2023-07-24 06:11:18,213 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43839] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,213 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 06:11:18,214 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
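Each region server above opens a ReadOnlyZKClient to 127.0.0.1:57158 with session timeout=90000ms, retries 30, retry interval 1000ms. A client or test can ask for the same ZooKeeper behaviour through configuration; the sketch below assumes the commonly used property names (zookeeper.session.timeout, zookeeper.recovery.retry, zookeeper.recovery.retry.intervalmill), which should be verified against the exact HBase version, and the quorum/port values are placeholders for whatever the mini cluster chose.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ZkClientTuningSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Placeholders; the test cluster above happened to use 127.0.0.1:57158.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 57158);
        // Mirrors "session timeout=90000ms, retries 30, retry interval 1000ms".
        conf.setInt("zookeeper.session.timeout", 90_000);
        conf.setInt("zookeeper.recovery.retry", 30);
        conf.setInt("zookeeper.recovery.retry.intervalmill", 1_000);

        try (Connection connection = ConnectionFactory.createConnection(conf)) {
          System.out.println("connected: " + !connection.isClosed());
        }
      }
    }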
2023-07-24 06:11:18,214 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 06:11:18,213 DEBUG [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33169 2023-07-24 06:11:18,214 DEBUG [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44749 2023-07-24 06:11:18,214 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e 2023-07-24 06:11:18,214 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33169 2023-07-24 06:11:18,214 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44749 2023-07-24 06:11:18,214 INFO [RS:2;jenkins-hbase4:33281] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:11:18,215 INFO [RS:2;jenkins-hbase4:33281] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:11:18,215 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:11:18,220 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:18,220 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:18,220 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43839,1690179077310 with isa=jenkins-hbase4.apache.org/172.31.14.131:33281, startcode=1690179077802 2023-07-24 06:11:18,221 DEBUG [RS:0;jenkins-hbase4:35855] zookeeper.ZKUtil(162): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:18,221 DEBUG [RS:2;jenkins-hbase4:33281] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:11:18,221 DEBUG [RS:1;jenkins-hbase4:37149] zookeeper.ZKUtil(162): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,221 WARN [RS:0;jenkins-hbase4:35855] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 06:11:18,221 WARN [RS:1;jenkins-hbase4:37149] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
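The "Config from master" lines above show the master handing each region server its hbase.rootdir, fs.defaultFS and hbase.master.info.port. In a standalone or test setup the same keys are simply set in the configuration; the sketch below uses a placeholder HDFS URI, not the one from this run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;

    public class RootDirSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // HConstants.HBASE_DIR is the constant for "hbase.rootdir".
        conf.set(HConstants.HBASE_DIR, "hdfs://namenode.example.org:8020/hbase"); // placeholder URI
        conf.set("fs.defaultFS", "hdfs://namenode.example.org:8020");             // placeholder URI
        conf.setInt("hbase.master.info.port", 16010); // default UI port; this test picked 44749
        System.out.println(conf.get(HConstants.HBASE_DIR));
      }
    }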
2023-07-24 06:11:18,221 INFO [RS:0;jenkins-hbase4:35855] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:18,221 INFO [RS:1;jenkins-hbase4:37149] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:18,221 DEBUG [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:18,221 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,222 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32853, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:11:18,223 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43839] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,223 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 06:11:18,223 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 06:11:18,223 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 06:11:18,232 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e 2023-07-24 06:11:18,232 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33169 2023-07-24 06:11:18,233 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44749 2023-07-24 06:11:18,234 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35855,1690179077491] 2023-07-24 06:11:18,234 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37149,1690179077652] 2023-07-24 06:11:18,236 DEBUG [RS:2;jenkins-hbase4:33281] zookeeper.ZKUtil(162): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,236 WARN [RS:2;jenkins-hbase4:33281] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
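At this point all three region servers (ports 35855, 37149 and 33281) have reported for duty, been registered by ServerManager, and been added to the rsgroup default group ("Updated with servers: 3"). From a client, the same membership can be read back through the Admin API; this is a sketch against the 2.x client and assumes a reachable cluster configuration on the classpath.

    import java.util.EnumSet;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class LiveServersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          ClusterMetrics metrics =
              admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
          // Each entry corresponds to one "Registering regionserver=..." line above.
          for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
            System.out.println("live regionserver: " + sn);
          }
        }
      }
    }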
2023-07-24 06:11:18,236 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/info 2023-07-24 06:11:18,236 INFO [RS:2;jenkins-hbase4:33281] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:18,236 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,236 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 06:11:18,236 DEBUG [RS:0;jenkins-hbase4:35855] zookeeper.ZKUtil(162): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,237 DEBUG [RS:0;jenkins-hbase4:35855] zookeeper.ZKUtil(162): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,238 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:18,238 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:18,240 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 06:11:18,241 DEBUG [RS:0;jenkins-hbase4:35855] zookeeper.ZKUtil(162): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:18,241 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33281,1690179077802] 2023-07-24 06:11:18,242 DEBUG [RS:1;jenkins-hbase4:37149] zookeeper.ZKUtil(162): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,242 DEBUG [RS:0;jenkins-hbase4:35855] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:11:18,242 INFO 
[RS:0;jenkins-hbase4:35855] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:11:18,243 DEBUG [RS:2;jenkins-hbase4:33281] zookeeper.ZKUtil(162): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,244 INFO [RS:0;jenkins-hbase4:35855] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:11:18,244 DEBUG [RS:1;jenkins-hbase4:37149] zookeeper.ZKUtil(162): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,244 DEBUG [RS:2;jenkins-hbase4:33281] zookeeper.ZKUtil(162): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,244 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:11:18,244 DEBUG [RS:1;jenkins-hbase4:37149] zookeeper.ZKUtil(162): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:18,244 DEBUG [RS:2;jenkins-hbase4:33281] zookeeper.ZKUtil(162): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:18,244 INFO [RS:0;jenkins-hbase4:35855] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:11:18,244 INFO [RS:0;jenkins-hbase4:35855] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
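The CompactionConfiguration line above reports the selection parameters each store will compact with (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, off-peak ratio 5.0), and PressureAwareCompactionThroughputController caps compaction throughput between 50 and 100 MB/s. A sketch of the corresponding configuration keys follows; the hbase.hstore.compaction.* names are assumed from common usage rather than copied from this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);             // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);            // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);      // selection ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
        System.out.println("compaction ratio = "
            + conf.getFloat("hbase.hstore.compaction.ratio", -1f));
      }
    }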
2023-07-24 06:11:18,244 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 06:11:18,245 INFO [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:11:18,245 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:11:18,246 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:11:18,246 INFO [RS:1;jenkins-hbase4:37149] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:11:18,246 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:18,246 INFO [RS:2;jenkins-hbase4:33281] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:11:18,246 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 06:11:18,246 INFO [RS:0;jenkins-hbase4:35855] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
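Each region server above instantiates a WALProvider of type AsyncFSWALProvider. The provider is chosen by configuration; the sketch below assumes the usual key names for the regular and meta WAL providers.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" selects AsyncFSWALProvider, as instantiated in the log above;
        // "filesystem" would select the classic FSHLog-based provider instead.
        conf.set("hbase.wal.provider", "asyncfs");
        conf.set("hbase.wal.meta_provider", "asyncfs"); // provider used for hbase:meta's WAL
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }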
2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,247 DEBUG [RS:0;jenkins-hbase4:35855] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,248 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/table 2023-07-24 06:11:18,249 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 06:11:18,249 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:18,252 INFO [RS:1;jenkins-hbase4:37149] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:11:18,253 INFO [RS:2;jenkins-hbase4:33281] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, 
Offheap=false 2023-07-24 06:11:18,254 INFO [RS:1;jenkins-hbase4:37149] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:11:18,254 INFO [RS:2;jenkins-hbase4:33281] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:11:18,254 INFO [RS:1;jenkins-hbase4:37149] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,254 INFO [RS:0;jenkins-hbase4:35855] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,254 INFO [RS:2;jenkins-hbase4:33281] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,255 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740 2023-07-24 06:11:18,255 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:11:18,255 INFO [RS:0;jenkins-hbase4:35855] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,255 INFO [RS:0;jenkins-hbase4:35855] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,255 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740 2023-07-24 06:11:18,257 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 06:11:18,258 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 06:11:18,258 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:11:18,261 INFO [RS:1;jenkins-hbase4:37149] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,262 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,262 INFO [RS:2;jenkins-hbase4:33281] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
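Several ScheduledChore instances are enabled above (CompactionChecker and MemstoreFlusherChore every 1000 ms, nonceCleaner every 360000 ms, CompactionThroughputTuner every 60000 ms). ChoreService and ScheduledChore are ordinary public utilities in hbase-common; the sketch below shows the general pattern with a made-up chore, not one of the chores from this log.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };

        // Hypothetical chore that fires once a second, like CompactionChecker above.
        ScheduledChore tick = new ScheduledChore("exampleChore", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("chore tick");
          }
        };

        ChoreService service = new ChoreService("example");
        service.scheduleChore(tick);
        Thread.sleep(3_000);
        service.shutdown();
      }
    }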
2023-07-24 06:11:18,262 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,262 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,262 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,262 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,262 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,262 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,262 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:18,263 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:18,263 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:1;jenkins-hbase4:37149] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,263 DEBUG [RS:2;jenkins-hbase4:33281] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:18,269 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:18,271 INFO [RS:1;jenkins-hbase4:37149] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,271 INFO [RS:1;jenkins-hbase4:37149] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,271 INFO [RS:1;jenkins-hbase4:37149] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,271 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12073329600, jitterRate=0.12441644072532654}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 06:11:18,271 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 06:11:18,271 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 06:11:18,271 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 06:11:18,271 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 06:11:18,271 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 06:11:18,271 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 06:11:18,274 INFO [RS:0;jenkins-hbase4:35855] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:11:18,274 INFO [RS:0;jenkins-hbase4:35855] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35855,1690179077491-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,274 INFO [RS:2;jenkins-hbase4:33281] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,274 INFO [RS:2;jenkins-hbase4:33281] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,275 INFO [RS:2;jenkins-hbase4:33281] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
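The region-open line above shows the effective split and flush policies for hbase:meta: a SteppingSplitPolicy over a desiredMaxFileSize of roughly 12 GB (after jitter), and FlushLargeStoresPolicy with a per-family lower bound of about 42.7 MB derived from the region flush size, since hbase.hregion.percolumnfamilyflush.size.lower.bound is not set on the table. These knobs are ordinary configuration keys; the sketch uses the standard property names as an assumption and is aimed at a hypothetical user table rather than hbase:meta.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushSplitTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Region split policy class and target region size before jitter.
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
        conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
        // Memstore flush size; FlushLargeStoresPolicy derives its per-family lower
        // bound from this when the per-column-family bound is not set on the table.
        conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        System.out.println(conf.get("hbase.regionserver.region.split.policy"));
      }
    }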
2023-07-24 06:11:18,279 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 06:11:18,279 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 06:11:18,280 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 06:11:18,280 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 06:11:18,280 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 06:11:18,281 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 06:11:18,282 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 06:11:18,289 INFO [RS:0;jenkins-hbase4:35855] regionserver.Replication(203): jenkins-hbase4.apache.org,35855,1690179077491 started 2023-07-24 06:11:18,289 INFO [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35855,1690179077491, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35855, sessionid=0x10195f487a90001 2023-07-24 06:11:18,289 DEBUG [RS:0;jenkins-hbase4:35855] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:11:18,289 DEBUG [RS:0;jenkins-hbase4:35855] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:18,289 DEBUG [RS:0;jenkins-hbase4:35855] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35855,1690179077491' 2023-07-24 06:11:18,289 DEBUG [RS:0;jenkins-hbase4:35855] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:11:18,290 DEBUG [RS:0;jenkins-hbase4:35855] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:11:18,290 DEBUG [RS:0;jenkins-hbase4:35855] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:11:18,290 DEBUG [RS:0;jenkins-hbase4:35855] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:11:18,290 DEBUG [RS:0;jenkins-hbase4:35855] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:18,290 DEBUG [RS:0;jenkins-hbase4:35855] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35855,1690179077491' 2023-07-24 06:11:18,290 DEBUG [RS:0;jenkins-hbase4:35855] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:11:18,290 INFO [RS:1;jenkins-hbase4:37149] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:11:18,291 INFO [RS:1;jenkins-hbase4:37149] hbase.ChoreService(166): Chore ScheduledChore 
name=jenkins-hbase4.apache.org,37149,1690179077652-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,291 DEBUG [RS:0;jenkins-hbase4:35855] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:11:18,292 DEBUG [RS:0;jenkins-hbase4:35855] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:11:18,292 INFO [RS:0;jenkins-hbase4:35855] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 06:11:18,292 INFO [RS:0;jenkins-hbase4:35855] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 06:11:18,295 INFO [RS:2;jenkins-hbase4:33281] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:11:18,295 INFO [RS:2;jenkins-hbase4:33281] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33281,1690179077802-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,309 INFO [RS:1;jenkins-hbase4:37149] regionserver.Replication(203): jenkins-hbase4.apache.org,37149,1690179077652 started 2023-07-24 06:11:18,309 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37149,1690179077652, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37149, sessionid=0x10195f487a90002 2023-07-24 06:11:18,309 DEBUG [RS:1;jenkins-hbase4:37149] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:11:18,309 DEBUG [RS:1;jenkins-hbase4:37149] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,309 DEBUG [RS:1;jenkins-hbase4:37149] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37149,1690179077652' 2023-07-24 06:11:18,309 DEBUG [RS:1;jenkins-hbase4:37149] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:11:18,310 DEBUG [RS:1;jenkins-hbase4:37149] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:11:18,310 DEBUG [RS:1;jenkins-hbase4:37149] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:11:18,310 DEBUG [RS:1;jenkins-hbase4:37149] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:11:18,310 DEBUG [RS:1;jenkins-hbase4:37149] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,310 DEBUG [RS:1;jenkins-hbase4:37149] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37149,1690179077652' 2023-07-24 06:11:18,310 DEBUG [RS:1;jenkins-hbase4:37149] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:11:18,311 DEBUG [RS:1;jenkins-hbase4:37149] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:11:18,311 DEBUG [RS:1;jenkins-hbase4:37149] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:11:18,311 INFO [RS:1;jenkins-hbase4:37149] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 06:11:18,311 INFO [RS:1;jenkins-hbase4:37149] 
quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 06:11:18,314 INFO [RS:2;jenkins-hbase4:33281] regionserver.Replication(203): jenkins-hbase4.apache.org,33281,1690179077802 started 2023-07-24 06:11:18,314 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33281,1690179077802, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33281, sessionid=0x10195f487a90003 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33281,1690179077802' 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33281,1690179077802' 2023-07-24 06:11:18,315 DEBUG [RS:2;jenkins-hbase4:33281] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:11:18,316 DEBUG [RS:2;jenkins-hbase4:33281] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:11:18,316 DEBUG [RS:2;jenkins-hbase4:33281] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:11:18,316 INFO [RS:2;jenkins-hbase4:33281] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 06:11:18,316 INFO [RS:2;jenkins-hbase4:33281] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
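Every region server reports "Quota support disabled" above because quotas default to off. Enabling them is a single switch; the sketch below uses the standard key, shown as an assumption.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaSwitchSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // The RPC and space quota managers only start when this is true
        // (it defaults to false, hence the "Quota support disabled" lines above).
        conf.setBoolean("hbase.quota.enabled", true);
        System.out.println(conf.getBoolean("hbase.quota.enabled", false));
      }
    }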
2023-07-24 06:11:18,394 INFO [RS:0;jenkins-hbase4:35855] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35855%2C1690179077491, suffix=, logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,35855,1690179077491, archiveDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs, maxLogs=32 2023-07-24 06:11:18,418 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK] 2023-07-24 06:11:18,418 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK] 2023-07-24 06:11:18,419 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK] 2023-07-24 06:11:18,419 INFO [RS:2;jenkins-hbase4:33281] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33281%2C1690179077802, suffix=, logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,33281,1690179077802, archiveDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs, maxLogs=32 2023-07-24 06:11:18,419 INFO [RS:1;jenkins-hbase4:37149] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37149%2C1690179077652, suffix=, logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,37149,1690179077652, archiveDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs, maxLogs=32 2023-07-24 06:11:18,432 DEBUG [jenkins-hbase4:43839] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 06:11:18,433 DEBUG [jenkins-hbase4:43839] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:18,433 DEBUG [jenkins-hbase4:43839] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:18,433 DEBUG [jenkins-hbase4:43839] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:18,433 DEBUG [jenkins-hbase4:43839] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:18,433 DEBUG [jenkins-hbase4:43839] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:18,434 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37149,1690179077652, state=OPENING 2023-07-24 06:11:18,434 INFO [RS:0;jenkins-hbase4:35855] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,35855,1690179077491/jenkins-hbase4.apache.org%2C35855%2C1690179077491.1690179078395 2023-07-24 06:11:18,435 
DEBUG [RS:0;jenkins-hbase4:35855] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK], DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK], DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK]] 2023-07-24 06:11:18,437 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 06:11:18,443 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:18,443 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 06:11:18,444 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37149,1690179077652}] 2023-07-24 06:11:18,463 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK] 2023-07-24 06:11:18,463 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK] 2023-07-24 06:11:18,463 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK] 2023-07-24 06:11:18,469 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK] 2023-07-24 06:11:18,470 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK] 2023-07-24 06:11:18,470 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK] 2023-07-24 06:11:18,470 INFO [RS:1;jenkins-hbase4:37149] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,37149,1690179077652/jenkins-hbase4.apache.org%2C37149%2C1690179077652.1690179078419 2023-07-24 06:11:18,474 DEBUG [RS:1;jenkins-hbase4:37149] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK], DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK], 
DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK]] 2023-07-24 06:11:18,475 INFO [RS:2;jenkins-hbase4:33281] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,33281,1690179077802/jenkins-hbase4.apache.org%2C33281%2C1690179077802.1690179078419 2023-07-24 06:11:18,475 DEBUG [RS:2;jenkins-hbase4:33281] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK], DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK], DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK]] 2023-07-24 06:11:18,616 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,616 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:11:18,619 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51938, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:11:18,626 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 06:11:18,632 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 06:11:18,632 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:18,636 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37149%2C1690179077652.meta, suffix=.meta, logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,37149,1690179077652, archiveDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs, maxLogs=32 2023-07-24 06:11:18,673 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK] 2023-07-24 06:11:18,675 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK] 2023-07-24 06:11:18,674 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK] 2023-07-24 06:11:18,679 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,37149,1690179077652/jenkins-hbase4.apache.org%2C37149%2C1690179077652.meta.1690179078637.meta 2023-07-24 06:11:18,679 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer 
with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK], DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK], DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK]] 2023-07-24 06:11:18,679 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:18,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 06:11:18,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 06:11:18,680 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 06:11:18,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 06:11:18,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:18,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 06:11:18,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 06:11:18,685 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 06:11:18,686 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/info 2023-07-24 06:11:18,686 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/info 2023-07-24 06:11:18,687 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 06:11:18,688 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:18,688 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 06:11:18,689 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:11:18,690 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/rep_barrier 2023-07-24 06:11:18,691 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 06:11:18,692 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:18,692 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 06:11:18,693 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/table 2023-07-24 06:11:18,693 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/table 2023-07-24 06:11:18,693 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 06:11:18,694 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:18,699 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740 2023-07-24 06:11:18,700 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740 2023-07-24 06:11:18,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 06:11:18,706 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 06:11:18,707 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9673455200, jitterRate=-0.09908927977085114}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 06:11:18,707 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 06:11:18,710 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690179078616 2023-07-24 06:11:18,716 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 06:11:18,717 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 06:11:18,720 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37149,1690179077652, state=OPEN 2023-07-24 06:11:18,721 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 06:11:18,721 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 06:11:18,723 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 06:11:18,723 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37149,1690179077652 in 277 msec 2023-07-24 06:11:18,725 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 06:11:18,725 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 444 msec 2023-07-24 06:11:18,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 618 msec 2023-07-24 
06:11:18,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690179078727, completionTime=-1 2023-07-24 06:11:18,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 06:11:18,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-24 06:11:18,732 DEBUG [hconnection-0x289ebf18-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:11:18,735 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51944, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:11:18,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 06:11:18,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690179138739 2023-07-24 06:11:18,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690179198739 2023-07-24 06:11:18,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 11 msec 2023-07-24 06:11:18,743 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43839,1690179077310] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:18,745 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43839,1690179077310] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 06:11:18,747 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 06:11:18,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43839,1690179077310-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43839,1690179077310-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:18,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43839,1690179077310-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43839, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:18,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-24 06:11:18,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:18,759 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:18,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 06:11:18,759 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:18,761 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:18,761 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6 empty. 
2023-07-24 06:11:18,762 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:18,762 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 06:11:18,768 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:18,769 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 06:11:18,769 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:18,771 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:18,771 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282 empty. 2023-07-24 06:11:18,772 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:18,772 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 06:11:18,797 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:18,799 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e1cf5974bfac51e5ef8438c944013be6, NAME => 'hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp 2023-07-24 06:11:18,802 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:18,803 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8c9e7b795719c4dfa78dc36415600282, NAME => 'hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 
'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp 2023-07-24 06:11:18,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:18,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e1cf5974bfac51e5ef8438c944013be6, disabling compactions & flushes 2023-07-24 06:11:18,827 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:18,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:18,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. after waiting 0 ms 2023-07-24 06:11:18,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:18,827 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:18,827 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e1cf5974bfac51e5ef8438c944013be6: 2023-07-24 06:11:18,831 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:18,832 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179078832"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179078832"}]},"ts":"1690179078832"} 2023-07-24 06:11:18,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:18,835 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:11:18,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8c9e7b795719c4dfa78dc36415600282, disabling compactions & flushes 2023-07-24 06:11:18,835 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:18,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:18,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 
after waiting 0 ms 2023-07-24 06:11:18,835 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:18,835 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:18,836 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8c9e7b795719c4dfa78dc36415600282: 2023-07-24 06:11:18,836 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:18,836 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179078836"}]},"ts":"1690179078836"} 2023-07-24 06:11:18,838 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 06:11:18,838 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:18,839 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179078839"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179078839"}]},"ts":"1690179078839"} 2023-07-24 06:11:18,840 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 06:11:18,841 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:18,841 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179078841"}]},"ts":"1690179078841"} 2023-07-24 06:11:18,841 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:18,842 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:18,842 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:18,842 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:18,842 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:18,842 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e1cf5974bfac51e5ef8438c944013be6, ASSIGN}] 2023-07-24 06:11:18,844 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e1cf5974bfac51e5ef8438c944013be6, ASSIGN 2023-07-24 06:11:18,844 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 06:11:18,845 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e1cf5974bfac51e5ef8438c944013be6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33281,1690179077802; forceNewPlan=false, retain=false 2023-07-24 06:11:18,848 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:18,849 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:18,849 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:18,849 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:18,849 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:18,849 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8c9e7b795719c4dfa78dc36415600282, ASSIGN}] 2023-07-24 06:11:18,850 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8c9e7b795719c4dfa78dc36415600282, ASSIGN 2023-07-24 06:11:18,850 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, 
region=8c9e7b795719c4dfa78dc36415600282, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37149,1690179077652; forceNewPlan=false, retain=false 2023-07-24 06:11:18,851 INFO [jenkins-hbase4:43839] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-24 06:11:18,853 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=e1cf5974bfac51e5ef8438c944013be6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:18,853 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179078853"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179078853"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179078853"}]},"ts":"1690179078853"} 2023-07-24 06:11:18,853 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=8c9e7b795719c4dfa78dc36415600282, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:18,854 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179078853"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179078853"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179078853"}]},"ts":"1690179078853"} 2023-07-24 06:11:18,854 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure e1cf5974bfac51e5ef8438c944013be6, server=jenkins-hbase4.apache.org,33281,1690179077802}] 2023-07-24 06:11:18,855 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 8c9e7b795719c4dfa78dc36415600282, server=jenkins-hbase4.apache.org,37149,1690179077652}] 2023-07-24 06:11:19,007 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:19,007 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 06:11:19,009 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53834, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 06:11:19,011 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 
2023-07-24 06:11:19,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8c9e7b795719c4dfa78dc36415600282, NAME => 'hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:19,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:19,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:19,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:19,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:19,012 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:19,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1cf5974bfac51e5ef8438c944013be6, NAME => 'hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:19,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 06:11:19,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. service=MultiRowMutationService 2023-07-24 06:11:19,013 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 06:11:19,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:19,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:19,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:19,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:19,013 INFO [StoreOpener-8c9e7b795719c4dfa78dc36415600282-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:19,014 INFO [StoreOpener-e1cf5974bfac51e5ef8438c944013be6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:19,015 DEBUG [StoreOpener-8c9e7b795719c4dfa78dc36415600282-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282/info 2023-07-24 06:11:19,015 DEBUG [StoreOpener-8c9e7b795719c4dfa78dc36415600282-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282/info 2023-07-24 06:11:19,015 INFO [StoreOpener-8c9e7b795719c4dfa78dc36415600282-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8c9e7b795719c4dfa78dc36415600282 columnFamilyName info 2023-07-24 06:11:19,015 DEBUG [StoreOpener-e1cf5974bfac51e5ef8438c944013be6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6/m 2023-07-24 06:11:19,015 DEBUG [StoreOpener-e1cf5974bfac51e5ef8438c944013be6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6/m 2023-07-24 06:11:19,016 INFO 
[StoreOpener-e1cf5974bfac51e5ef8438c944013be6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1cf5974bfac51e5ef8438c944013be6 columnFamilyName m 2023-07-24 06:11:19,016 INFO [StoreOpener-8c9e7b795719c4dfa78dc36415600282-1] regionserver.HStore(310): Store=8c9e7b795719c4dfa78dc36415600282/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:19,016 INFO [StoreOpener-e1cf5974bfac51e5ef8438c944013be6-1] regionserver.HStore(310): Store=e1cf5974bfac51e5ef8438c944013be6/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:19,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:19,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:19,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:19,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:19,020 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:19,020 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:19,023 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:19,023 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:19,024 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 8c9e7b795719c4dfa78dc36415600282; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11984898240, jitterRate=0.11618062853813171}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:19,024 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e1cf5974bfac51e5ef8438c944013be6; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@33afd9cb, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:19,024 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8c9e7b795719c4dfa78dc36415600282: 2023-07-24 06:11:19,024 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e1cf5974bfac51e5ef8438c944013be6: 2023-07-24 06:11:19,025 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6., pid=8, masterSystemTime=1690179079007 2023-07-24 06:11:19,025 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282., pid=9, masterSystemTime=1690179079007 2023-07-24 06:11:19,029 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:19,029 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:19,030 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=8c9e7b795719c4dfa78dc36415600282, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:19,030 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:19,030 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690179079029"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179079029"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179079029"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179079029"}]},"ts":"1690179079029"} 2023-07-24 06:11:19,030 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 
2023-07-24 06:11:19,031 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=e1cf5974bfac51e5ef8438c944013be6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:19,031 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690179079031"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179079031"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179079031"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179079031"}]},"ts":"1690179079031"} 2023-07-24 06:11:19,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 06:11:19,033 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 8c9e7b795719c4dfa78dc36415600282, server=jenkins-hbase4.apache.org,37149,1690179077652 in 177 msec 2023-07-24 06:11:19,034 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-24 06:11:19,034 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure e1cf5974bfac51e5ef8438c944013be6, server=jenkins-hbase4.apache.org,33281,1690179077802 in 178 msec 2023-07-24 06:11:19,035 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 06:11:19,035 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8c9e7b795719c4dfa78dc36415600282, ASSIGN in 184 msec 2023-07-24 06:11:19,036 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:19,036 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179079036"}]},"ts":"1690179079036"} 2023-07-24 06:11:19,036 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-24 06:11:19,036 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e1cf5974bfac51e5ef8438c944013be6, ASSIGN in 192 msec 2023-07-24 06:11:19,037 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:19,037 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179079037"}]},"ts":"1690179079037"} 2023-07-24 06:11:19,037 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 06:11:19,038 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 06:11:19,039 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:19,040 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:19,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 281 msec 2023-07-24 06:11:19,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 297 msec 2023-07-24 06:11:19,048 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43839,1690179077310] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:11:19,049 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53848, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:11:19,051 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 06:11:19,051 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-24 06:11:19,055 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:19,055 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:19,058 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 06:11:19,059 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 06:11:19,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 06:11:19,061 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:19,061 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:19,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 06:11:19,072 DEBUG [Listener at 
localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:19,075 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-24 06:11:19,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 06:11:19,084 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:19,087 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-24 06:11:19,102 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 06:11:19,104 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 06:11:19,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.146sec 2023-07-24 06:11:19,105 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 06:11:19,105 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 06:11:19,105 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 06:11:19,105 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43839,1690179077310-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 06:11:19,105 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43839,1690179077310-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-24 06:11:19,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 06:11:19,157 DEBUG [Listener at localhost/36479] zookeeper.ReadOnlyZKClient(139): Connect 0x7c471bef to 127.0.0.1:57158 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:19,163 DEBUG [Listener at localhost/36479] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30e74a47, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:19,165 DEBUG [hconnection-0x6195d96d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:11:19,168 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51960, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:11:19,169 INFO [Listener at localhost/36479] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:19,170 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:19,172 DEBUG [Listener at localhost/36479] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 06:11:19,176 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57022, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 06:11:19,181 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 06:11:19,181 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:19,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 06:11:19,182 DEBUG [Listener at localhost/36479] zookeeper.ReadOnlyZKClient(139): Connect 0x4ef56e43 to 127.0.0.1:57158 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:19,190 DEBUG [Listener at localhost/36479] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1649841, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:19,190 INFO [Listener at localhost/36479] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57158 2023-07-24 06:11:19,194 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:19,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): 
Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:19,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:19,200 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10195f487a9000a connected 2023-07-24 06:11:19,204 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 06:11:19,221 INFO [Listener at localhost/36479] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 06:11:19,222 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:19,222 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:19,222 INFO [Listener at localhost/36479] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 06:11:19,222 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 06:11:19,222 INFO [Listener at localhost/36479] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 06:11:19,222 INFO [Listener at localhost/36479] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 06:11:19,223 INFO [Listener at localhost/36479] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42697 2023-07-24 06:11:19,223 INFO [Listener at localhost/36479] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 06:11:19,225 DEBUG [Listener at localhost/36479] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 06:11:19,226 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:19,227 INFO [Listener at localhost/36479] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 06:11:19,227 INFO [Listener at localhost/36479] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42697 connecting to ZooKeeper ensemble=127.0.0.1:57158 2023-07-24 06:11:19,232 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:426970x0, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 06:11:19,233 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(162): regionserver:426970x0, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 06:11:19,234 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): regionserver:42697-0x10195f487a9000b connected 2023-07-24 06:11:19,235 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(162): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 06:11:19,235 DEBUG [Listener at localhost/36479] zookeeper.ZKUtil(164): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 06:11:19,238 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42697 2023-07-24 06:11:19,239 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42697 2023-07-24 06:11:19,239 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42697 2023-07-24 06:11:19,242 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42697 2023-07-24 06:11:19,242 DEBUG [Listener at localhost/36479] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42697 2023-07-24 06:11:19,244 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 06:11:19,244 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 06:11:19,244 INFO [Listener at localhost/36479] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 06:11:19,245 INFO [Listener at localhost/36479] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 06:11:19,245 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 06:11:19,245 INFO [Listener at localhost/36479] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 06:11:19,245 INFO [Listener at localhost/36479] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 06:11:19,245 INFO [Listener at localhost/36479] http.HttpServer(1146): Jetty bound to port 45139 2023-07-24 06:11:19,245 INFO [Listener at localhost/36479] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 06:11:19,249 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:19,249 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2288e488{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,AVAILABLE} 2023-07-24 06:11:19,250 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:19,250 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@55e0b7e1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 06:11:19,361 INFO [Listener at localhost/36479] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 06:11:19,362 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 06:11:19,362 INFO [Listener at localhost/36479] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 06:11:19,363 INFO [Listener at localhost/36479] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 06:11:19,363 INFO [Listener at localhost/36479] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 06:11:19,364 INFO [Listener at localhost/36479] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@294f019b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/java.io.tmpdir/jetty-0_0_0_0-45139-hbase-server-2_4_18-SNAPSHOT_jar-_-any-831497931544573958/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:19,366 INFO [Listener at localhost/36479] server.AbstractConnector(333): Started ServerConnector@345dd9b3{HTTP/1.1, (http/1.1)}{0.0.0.0:45139} 2023-07-24 06:11:19,366 INFO [Listener at localhost/36479] server.Server(415): Started @44967ms 2023-07-24 06:11:19,368 INFO [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(951): ClusterId : 6894efff-4eac-4326-b1de-20b1b26bc674 2023-07-24 06:11:19,371 DEBUG [RS:3;jenkins-hbase4:42697] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 06:11:19,373 DEBUG [RS:3;jenkins-hbase4:42697] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 06:11:19,373 DEBUG [RS:3;jenkins-hbase4:42697] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 06:11:19,375 DEBUG [RS:3;jenkins-hbase4:42697] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 06:11:19,378 DEBUG [RS:3;jenkins-hbase4:42697] zookeeper.ReadOnlyZKClient(139): Connect 0x2bebdbb8 to 
127.0.0.1:57158 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 06:11:19,384 DEBUG [RS:3;jenkins-hbase4:42697] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@252c20b5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 06:11:19,384 DEBUG [RS:3;jenkins-hbase4:42697] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2113b0c8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:19,397 DEBUG [RS:3;jenkins-hbase4:42697] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:42697 2023-07-24 06:11:19,397 INFO [RS:3;jenkins-hbase4:42697] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 06:11:19,397 INFO [RS:3;jenkins-hbase4:42697] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 06:11:19,397 DEBUG [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 06:11:19,398 INFO [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43839,1690179077310 with isa=jenkins-hbase4.apache.org/172.31.14.131:42697, startcode=1690179079221 2023-07-24 06:11:19,398 DEBUG [RS:3;jenkins-hbase4:42697] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 06:11:19,401 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34307, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 06:11:19,401 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43839] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,401 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 06:11:19,402 DEBUG [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e 2023-07-24 06:11:19,402 DEBUG [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33169 2023-07-24 06:11:19,402 DEBUG [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44749 2023-07-24 06:11:19,407 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:19,407 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:19,407 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:19,407 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:19,407 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:19,407 DEBUG [RS:3;jenkins-hbase4:42697] zookeeper.ZKUtil(162): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,408 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 06:11:19,408 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42697,1690179079221] 2023-07-24 06:11:19,408 WARN [RS:3;jenkins-hbase4:42697] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 06:11:19,408 INFO [RS:3;jenkins-hbase4:42697] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 06:11:19,408 DEBUG [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,408 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:19,412 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 06:11:19,412 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:19,412 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:19,412 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:19,414 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:19,415 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:19,415 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:19,415 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,415 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:19,415 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:19,415 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,416 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,416 DEBUG [RS:3;jenkins-hbase4:42697] zookeeper.ZKUtil(162): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:19,416 DEBUG [RS:3;jenkins-hbase4:42697] zookeeper.ZKUtil(162): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:19,417 DEBUG [RS:3;jenkins-hbase4:42697] zookeeper.ZKUtil(162): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:19,417 DEBUG [RS:3;jenkins-hbase4:42697] zookeeper.ZKUtil(162): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,418 DEBUG [RS:3;jenkins-hbase4:42697] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 06:11:19,418 INFO [RS:3;jenkins-hbase4:42697] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 06:11:19,422 INFO [RS:3;jenkins-hbase4:42697] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 06:11:19,424 INFO [RS:3;jenkins-hbase4:42697] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 06:11:19,424 INFO [RS:3;jenkins-hbase4:42697] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:19,424 INFO [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 06:11:19,426 INFO [RS:3;jenkins-hbase4:42697] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:19,426 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,426 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,427 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,428 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,428 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,428 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 06:11:19,428 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,428 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,428 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,428 DEBUG [RS:3;jenkins-hbase4:42697] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 06:11:19,430 INFO [RS:3;jenkins-hbase4:42697] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:19,431 INFO [RS:3;jenkins-hbase4:42697] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:19,431 INFO [RS:3;jenkins-hbase4:42697] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 06:11:19,449 INFO [RS:3;jenkins-hbase4:42697] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 06:11:19,449 INFO [RS:3;jenkins-hbase4:42697] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42697,1690179079221-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 06:11:19,461 INFO [RS:3;jenkins-hbase4:42697] regionserver.Replication(203): jenkins-hbase4.apache.org,42697,1690179079221 started 2023-07-24 06:11:19,461 INFO [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42697,1690179079221, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42697, sessionid=0x10195f487a9000b 2023-07-24 06:11:19,461 DEBUG [RS:3;jenkins-hbase4:42697] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 06:11:19,461 DEBUG [RS:3;jenkins-hbase4:42697] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,461 DEBUG [RS:3;jenkins-hbase4:42697] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42697,1690179079221' 2023-07-24 06:11:19,461 DEBUG [RS:3;jenkins-hbase4:42697] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 06:11:19,461 DEBUG [RS:3;jenkins-hbase4:42697] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 06:11:19,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:19,462 DEBUG [RS:3;jenkins-hbase4:42697] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 06:11:19,462 DEBUG [RS:3;jenkins-hbase4:42697] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 06:11:19,462 DEBUG [RS:3;jenkins-hbase4:42697] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:19,462 DEBUG [RS:3;jenkins-hbase4:42697] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42697,1690179079221' 2023-07-24 06:11:19,462 DEBUG [RS:3;jenkins-hbase4:42697] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 06:11:19,463 DEBUG [RS:3;jenkins-hbase4:42697] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 06:11:19,463 DEBUG [RS:3;jenkins-hbase4:42697] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 06:11:19,463 INFO [RS:3;jenkins-hbase4:42697] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 06:11:19,463 INFO [RS:3;jenkins-hbase4:42697] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 06:11:19,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:19,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:19,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:19,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:19,471 DEBUG [hconnection-0x33138f5e-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:11:19,484 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51972, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:11:19,490 DEBUG [hconnection-0x33138f5e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 06:11:19,492 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53862, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 06:11:19,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:19,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:19,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:19,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:19,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:57022 deadline: 1690180279497, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:19,498 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:19,500 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:19,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:19,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:19,502 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:19,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:19,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:19,568 INFO [RS:3;jenkins-hbase4:42697] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42697%2C1690179079221, suffix=, logDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,42697,1690179079221, archiveDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs, maxLogs=32 2023-07-24 06:11:19,579 INFO [Listener at localhost/36479] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=559 (was 502) Potentially hanging thread: Listener at localhost/33861-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially 
hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@5fda26cd java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x0be04482-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36479-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@212b1f9f[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57631@0x3d692739 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp539864118-2537 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1562069632-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 34391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-793892752_17 at /127.0.0.1:47134 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x289ebf18-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1413405032-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33169 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_418177995_17 at /127.0.0.1:33026 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 
qtp1691144299-2267-acceptor-0@7b28b37b-ServerConnector@6a0b3282{HTTP/1.1, (http/1.1)}{0.0.0.0:42313} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e-prefix:jenkins-hbase4.apache.org,35855,1690179077491 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:35855Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1562069632-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2632374-2195 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:43327 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 301564288@qtp-623887630-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x47510a59-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-537-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data1/current/BP-1053864684-172.31.14.131-1690179076570 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@423f842b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36479.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server idle connection scanner for port 33169 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/36479-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data1) 
java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 1 on default port 36479 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp539864118-2534 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:43327 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp539864118-2536 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1562069632-2258 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:43327 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x33695ab7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:43327 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1799bcfb-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-6994f8ba-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-555-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 37937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x289ebf18-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33169 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1413405032-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1691144299-2265 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36479-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1691144299-2263 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 729984887@qtp-754566410-1 - Acceptor0 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35323 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@4e0e1c83 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-535-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x47510a59 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:33281Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:33281 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 1951542644@qtp-2011965437-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936396025_17 at /127.0.0.1:40806 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741833_1009] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_418177995_17 at /127.0.0.1:32920 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x2bebdbb8-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:37149-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2632374-2197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@12dc1ef4[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x4ef56e43 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x2bebdbb8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179078134 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1970288f sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data2/current/BP-1053864684-172.31.14.131-1690179076570 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1038439160_17 at /127.0.0.1:47202 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_418177995_17 at /127.0.0.1:40816 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e-prefix:jenkins-hbase4.apache.org,33281,1690179077802 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2632374-2194 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data5/current/BP-1053864684-172.31.14.131-1690179076570 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936396025_17 at /127.0.0.1:47220 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/33861-SendThread(127.0.0.1:57631) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: Listener at localhost/36479-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/36479-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 37937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x4ef56e43-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1562069632-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6344f3ff-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1038439160_17 at /127.0.0.1:40794 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e-prefix:jenkins-hbase4.apache.org,37149,1690179077652 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1052710344-2168 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7ea965a4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2632374-2193-acceptor-0@54251d90-ServerConnector@54f6b69{HTTP/1.1, (http/1.1)}{0.0.0.0:42451} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33169 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Session-HouseKeeper-6499b8a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Listener at 
localhost/36479-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@db6287f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:43327 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37149Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57631@0x3d692739-SendThread(127.0.0.1:57631) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936396025_17 at /127.0.0.1:40830 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2632374-2192 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44691,1690179071755 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data3/current/BP-1053864684-172.31.14.131-1690179076570 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1413405032-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43327 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server idle connection scanner for port 37937 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@3e741ffd[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@d0a1eb4 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36479-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 34391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x289ebf18-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 36479 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1507228613@qtp-1603809289-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36141 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1052710344-2166 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36479 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x47510a59-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/36479.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x289ebf18-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1052710344-2164 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@da39e4e java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1562069632-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1413405032-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1052710344-2167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-560-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x33138f5e-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1052710344-2161 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-28beacb7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 571331648@qtp-2011965437-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39371 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1038439160_17 at /127.0.0.1:33002 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:43327 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1562069632-2253-acceptor-0@9e3e8ed-ServerConnector@19acff39{HTTP/1.1, (http/1.1)}{0.0.0.0:33291} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x392f15ad-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp539864118-2532 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1052710344-2163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1413405032-2223-acceptor-0@489dfd78-ServerConnector@7e6ae85e{HTTP/1.1, (http/1.1)}{0.0.0.0:33833} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33169 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 3 on default port 37937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x0be04482 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_418177995_17 at /127.0.0.1:47216 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1691144299-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:37149 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp539864118-2530 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x7c471bef-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36479 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) 
org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x33138f5e-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1838cb6 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1413405032-2222 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x7c471bef sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936396025_17 at /127.0.0.1:47210 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:33169 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-793892752_17 at /127.0.0.1:40762 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1470417251) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33169 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data4/current/BP-1053864684-172.31.14.131-1690179076570 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34391 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1691144299-2268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x7c471bef-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x392f15ad-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1413405032-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:33169 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1037342502@qtp-623887630-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42599 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp539864118-2535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@49801758 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-793892752_17 at /127.0.0.1:47160 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36479-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179078134 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: Listener at localhost/36479-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: M:0;jenkins-hbase4:43839 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1691144299-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:33281-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:35855-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 739416838@qtp-1603809289-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: jenkins-hbase4:42697Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp539864118-2533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data6/current/BP-1053864684-172.31.14.131-1690179076570 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42697-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@5998d79e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36479-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1562069632-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:35855 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2632374-2198 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 37937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33169 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2632374-2196 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x4ef56e43-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6195d96d-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1562069632-2252 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:43327 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36479 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1299681841@qtp-754566410-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:33169 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36479 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x289ebf18-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:57158): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x33695ab7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 34391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/36479.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42697 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/36479.LruBlockCache.EvictionThread 
java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1052710344-2162-acceptor-0@87e9aea-ServerConnector@b94c84d{HTTP/1.1, (http/1.1)}{0.0.0.0:44749} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1052710344-2165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43839,1690179077310 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x2bebdbb8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 34391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 36479 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e-prefix:jenkins-hbase4.apache.org,37149,1690179077652.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 33169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x392f15ad sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/578922434.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x289ebf18-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 34391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36479-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-793892752_17 at /127.0.0.1:32968 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 37937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x33695ab7-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x289ebf18-shared-pool-4 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936396025_17 at /127.0.0.1:40822 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7fe9939a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x289ebf18-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (141952636) connection to localhost/127.0.0.1:33169 from jenkins.hfs.8 java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36479-SendThread(127.0.0.1:57158) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936396025_17 at /127.0.0.1:33028 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData-prefix:jenkins-hbase4.apache.org,43839,1690179077310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client 
(141952636) connection to localhost/127.0.0.1:43327 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1413405032-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2632374-2199 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1936396025_17 at /127.0.0.1:33016 [Receiving block BP-1053864684-172.31.14.131-1690179076570:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42697 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:57158 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: PacketResponder: BP-1053864684-172.31.14.131-1690179076570:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1691144299-2264 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57158@0x0be04482-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp539864118-2531-acceptor-0@6a629e53-ServerConnector@345dd9b3{HTTP/1.1, (http/1.1)}{0.0.0.0:45139} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1691144299-2266 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1204231330.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57631@0x3d692739-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@57009bbf sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=827 (was 771) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=421 (was 380) - SystemLoadAverage LEAK? 
-, ProcessCount=175 (was 175), AvailableMemoryMB=7850 (was 8125) 2023-07-24 06:11:19,583 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=559 is superior to 500 2023-07-24 06:11:19,600 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK] 2023-07-24 06:11:19,600 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK] 2023-07-24 06:11:19,600 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK] 2023-07-24 06:11:19,603 INFO [Listener at localhost/36479] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=558, OpenFileDescriptor=827, MaxFileDescriptor=60000, SystemLoadAverage=421, ProcessCount=175, AvailableMemoryMB=7848 2023-07-24 06:11:19,603 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=558 is superior to 500 2023-07-24 06:11:19,603 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-24 06:11:19,605 INFO [RS:3;jenkins-hbase4:42697] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/WALs/jenkins-hbase4.apache.org,42697,1690179079221/jenkins-hbase4.apache.org%2C42697%2C1690179079221.1690179079569 2023-07-24 06:11:19,606 DEBUG [RS:3;jenkins-hbase4:42697] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38311,DS-4aa83894-3d58-4fb9-94c1-23d0ec383f66,DISK], DatanodeInfoWithStorage[127.0.0.1:45991,DS-7f11aa4e-cf28-464a-9e26-059c1392e4eb,DISK], DatanodeInfoWithStorage[127.0.0.1:37787,DS-50801dc8-2ca1-489f-a8a8-cb6604e939dd,DISK]] 2023-07-24 06:11:19,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:19,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:19,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:19,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:11:19,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:19,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:19,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:19,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:19,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:19,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:19,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:19,619 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:19,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:19,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:19,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:19,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:19,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:19,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:19,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:19,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:19,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:19,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:57022 deadline: 1690180279630, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:19,631 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:19,633 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:19,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:19,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:19,633 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:19,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:19,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:19,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:19,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 06:11:19,638 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:19,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: 
namespace: "default" qualifier: "t1" procId is: 12 2023-07-24 06:11:19,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:11:19,640 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:19,641 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:19,641 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:19,643 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 06:11:19,644 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:19,645 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/dc590e2b794bbef116127fcda4560aaf empty. 2023-07-24 06:11:19,646 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:19,646 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 06:11:19,659 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-24 06:11:19,660 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => dc590e2b794bbef116127fcda4560aaf, NAME => 't1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp 2023-07-24 06:11:19,673 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:19,673 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing dc590e2b794bbef116127fcda4560aaf, disabling compactions & flushes 2023-07-24 06:11:19,673 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:19,673 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:19,673 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 
after waiting 0 ms 2023-07-24 06:11:19,673 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:19,673 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:19,673 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for dc590e2b794bbef116127fcda4560aaf: 2023-07-24 06:11:19,675 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 06:11:19,676 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179079676"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179079676"}]},"ts":"1690179079676"} 2023-07-24 06:11:19,677 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 06:11:19,678 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 06:11:19,678 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179079678"}]},"ts":"1690179079678"} 2023-07-24 06:11:19,679 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-24 06:11:19,683 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 06:11:19,683 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 06:11:19,683 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 06:11:19,683 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 06:11:19,683 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 06:11:19,683 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 06:11:19,683 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=dc590e2b794bbef116127fcda4560aaf, ASSIGN}] 2023-07-24 06:11:19,684 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=dc590e2b794bbef116127fcda4560aaf, ASSIGN 2023-07-24 06:11:19,687 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=dc590e2b794bbef116127fcda4560aaf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37149,1690179077652; forceNewPlan=false, retain=false 2023-07-24 06:11:19,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:11:19,837 INFO [jenkins-hbase4:43839] 
balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 06:11:19,838 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=dc590e2b794bbef116127fcda4560aaf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:19,838 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179079838"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179079838"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179079838"}]},"ts":"1690179079838"} 2023-07-24 06:11:19,840 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure dc590e2b794bbef116127fcda4560aaf, server=jenkins-hbase4.apache.org,37149,1690179077652}] 2023-07-24 06:11:19,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:11:19,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:19,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dc590e2b794bbef116127fcda4560aaf, NAME => 't1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.', STARTKEY => '', ENDKEY => ''} 2023-07-24 06:11:19,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:19,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 06:11:19,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:19,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:19,997 INFO [StoreOpener-dc590e2b794bbef116127fcda4560aaf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:19,998 DEBUG [StoreOpener-dc590e2b794bbef116127fcda4560aaf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/default/t1/dc590e2b794bbef116127fcda4560aaf/cf1 2023-07-24 06:11:19,998 DEBUG [StoreOpener-dc590e2b794bbef116127fcda4560aaf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/default/t1/dc590e2b794bbef116127fcda4560aaf/cf1 2023-07-24 06:11:19,999 INFO [StoreOpener-dc590e2b794bbef116127fcda4560aaf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dc590e2b794bbef116127fcda4560aaf columnFamilyName cf1 2023-07-24 06:11:19,999 INFO [StoreOpener-dc590e2b794bbef116127fcda4560aaf-1] regionserver.HStore(310): Store=dc590e2b794bbef116127fcda4560aaf/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 06:11:20,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/default/t1/dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:20,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/default/t1/dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:20,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:20,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/default/t1/dc590e2b794bbef116127fcda4560aaf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 06:11:20,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dc590e2b794bbef116127fcda4560aaf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9848515360, jitterRate=-0.08278553187847137}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 06:11:20,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dc590e2b794bbef116127fcda4560aaf: 2023-07-24 06:11:20,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf., pid=14, masterSystemTime=1690179079991 2023-07-24 06:11:20,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:20,010 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 
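The entries above show region dc590e2b794bbef116127fcda4560aaf of t1 being opened on jenkins-hbase4.apache.org,37149,1690179077652. As a hedged illustration only (this code is not part of the test source shown in the log), a client can observe the same assignment through the RegionLocator API of the HBase 2.x client; the class name and the default-configuration connection below are assumptions made for the sketch.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class LocateT1Regions {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("t1"))) {
      // Each location pairs an encoded region name with the RegionServer hosting it,
      // e.g. dc590e2b794bbef116127fcda4560aaf on jenkins-hbase4.apache.org,37149,... as logged above.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}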
2023-07-24 06:11:20,011 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=dc590e2b794bbef116127fcda4560aaf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:20,011 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179080011"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690179080011"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690179080011"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690179080011"}]},"ts":"1690179080011"} 2023-07-24 06:11:20,013 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-24 06:11:20,014 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure dc590e2b794bbef116127fcda4560aaf, server=jenkins-hbase4.apache.org,37149,1690179077652 in 172 msec 2023-07-24 06:11:20,015 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 06:11:20,015 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=dc590e2b794bbef116127fcda4560aaf, ASSIGN in 330 msec 2023-07-24 06:11:20,015 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 06:11:20,016 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179080016"}]},"ts":"1690179080016"} 2023-07-24 06:11:20,017 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-24 06:11:20,019 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 06:11:20,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 383 msec 2023-07-24 06:11:20,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 06:11:20,243 INFO [Listener at localhost/36479] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-24 06:11:20,243 DEBUG [Listener at localhost/36479] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-24 06:11:20,244 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:20,246 INFO [Listener at localhost/36479] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-24 06:11:20,246 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:20,246 INFO [Listener at localhost/36479] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
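The procedure trace above (pid=12 through pid=14) is the server side of a single createTable request for 't1' with one column family 'cf1'. A minimal sketch of the equivalent client-side call with the HBase 2.x Admin API follows; it is illustrative only (the test drives this through its own helpers, which are not visible in this log), and the column-family attributes are copied from what the log reports for 'cf1'.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateT1 {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor t1 = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
          // Single family 'cf1' with VERSIONS => '1' and BLOOMFILTER => 'NONE', as reported in the log.
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.NONE)
              .build())
          .build();
      // Drives a CreateTableProcedure on the master (pid=12 in the log above) and blocks
      // until the procedure completes and the region is assigned.
      admin.createTable(t1);
    }
  }
}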
2023-07-24 06:11:20,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 06:11:20,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 06:11:20,250 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 06:11:20,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-24 06:11:20,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:57022 deadline: 1690179140247, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-24 06:11:20,252 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:20,253 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-24 06:11:20,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:20,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:20,354 INFO [Listener at localhost/36479] client.HBaseAdmin$15(890): Started disable of t1 2023-07-24 06:11:20,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-24 06:11:20,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-24 06:11:20,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 06:11:20,358 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179080358"}]},"ts":"1690179080358"} 2023-07-24 06:11:20,360 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-24 06:11:20,361 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-24 06:11:20,362 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=dc590e2b794bbef116127fcda4560aaf, UNASSIGN}] 2023-07-24 06:11:20,363 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=dc590e2b794bbef116127fcda4560aaf, UNASSIGN 2023-07-24 06:11:20,363 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=dc590e2b794bbef116127fcda4560aaf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:20,363 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179080363"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690179080363"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690179080363"}]},"ts":"1690179080363"} 2023-07-24 06:11:20,365 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure dc590e2b794bbef116127fcda4560aaf, server=jenkins-hbase4.apache.org,37149,1690179077652}] 2023-07-24 06:11:20,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 06:11:20,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:20,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dc590e2b794bbef116127fcda4560aaf, disabling compactions & flushes 2023-07-24 06:11:20,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:20,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:20,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. after waiting 0 ms 2023-07-24 06:11:20,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 
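The close of dc590e2b794bbef116127fcda4560aaf above is part of DisableTableProcedure pid=16; the test then drops the table with DeleteTableProcedure pid=19 (below). A hedged sketch of the client-side disable-then-delete sequence, assuming an Admin handle obtained as in the earlier sketch; HBase requires a table to be disabled before it can be deleted.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DropT1 {
  static void dropIfExists(Admin admin, TableName table) throws IOException {
    if (admin.tableExists(table)) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table); // DisableTableProcedure (pid=16 above): closes regions, marks DISABLED in hbase:meta
      }
      admin.deleteTable(table);    // DeleteTableProcedure (pid=19 below): archives region dirs, removes hbase:meta rows
    }
  }
}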
2023-07-24 06:11:20,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/default/t1/dc590e2b794bbef116127fcda4560aaf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 06:11:20,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf. 2023-07-24 06:11:20,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dc590e2b794bbef116127fcda4560aaf: 2023-07-24 06:11:20,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:20,526 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=dc590e2b794bbef116127fcda4560aaf, regionState=CLOSED 2023-07-24 06:11:20,527 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690179080526"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690179080526"}]},"ts":"1690179080526"} 2023-07-24 06:11:20,529 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 06:11:20,529 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure dc590e2b794bbef116127fcda4560aaf, server=jenkins-hbase4.apache.org,37149,1690179077652 in 163 msec 2023-07-24 06:11:20,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-24 06:11:20,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=dc590e2b794bbef116127fcda4560aaf, UNASSIGN in 167 msec 2023-07-24 06:11:20,531 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690179080531"}]},"ts":"1690179080531"} 2023-07-24 06:11:20,533 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-24 06:11:20,537 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-24 06:11:20,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 183 msec 2023-07-24 06:11:20,605 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 06:11:20,605 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 06:11:20,605 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:11:20,605 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase 
RegionObservers 2023-07-24 06:11:20,606 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 06:11:20,606 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 06:11:20,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 06:11:20,660 INFO [Listener at localhost/36479] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-24 06:11:20,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-24 06:11:20,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-24 06:11:20,665 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 06:11:20,666 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-24 06:11:20,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-24 06:11:20,679 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:20,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:20,681 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/dc590e2b794bbef116127fcda4560aaf/cf1, FileablePath, hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/dc590e2b794bbef116127fcda4560aaf/recovered.edits] 2023-07-24 06:11:20,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:20,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 06:11:20,686 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/dc590e2b794bbef116127fcda4560aaf/recovered.edits/4.seqid to hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/archive/data/default/t1/dc590e2b794bbef116127fcda4560aaf/recovered.edits/4.seqid 2023-07-24 06:11:20,686 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted 
hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/.tmp/data/default/t1/dc590e2b794bbef116127fcda4560aaf 2023-07-24 06:11:20,686 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 06:11:20,689 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-24 06:11:20,690 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-24 06:11:20,692 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-24 06:11:20,693 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-24 06:11:20,693 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 2023-07-24 06:11:20,693 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690179080693"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:20,694 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 06:11:20,694 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => dc590e2b794bbef116127fcda4560aaf, NAME => 't1,,1690179079635.dc590e2b794bbef116127fcda4560aaf.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 06:11:20,694 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-24 06:11:20,694 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690179080694"}]},"ts":"9223372036854775807"} 2023-07-24 06:11:20,696 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-24 06:11:20,697 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 06:11:20,698 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 36 msec 2023-07-24 06:11:20,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 06:11:20,785 INFO [Listener at localhost/36479] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-24 06:11:20,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:20,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
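After dropping t1, the per-test cleanup repeats the rsgroup reset seen earlier in the log: move tables and servers back to 'default', remove and re-add the 'master' group, then attempt to move the master's address into it, which the server rejects because that address is not a live RegionServer. Below is a hedged sketch of those calls using RSGroupAdminClient from the hbase-rsgroup module; the helper wiring in TestRSGroupsBase is not visible in this log, so the class, method, and parameter names are assumptions made for illustration.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class RSGroupCleanupSketch {
  static void resetGroups(Connection conn, Address masterAddress) throws IOException {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);
    // Mirror the logged sequence: everything back to 'default', then recreate 'master'.
    groups.moveTables(Collections.emptySet(), "default");   // "move tables [] to rsgroup default"
    groups.moveServers(Collections.emptySet(), "default");  // "move servers [] to rsgroup default"
    groups.removeRSGroup("master");                         // group exists from the previous setup, as in the log
    groups.addRSGroup("master");
    try {
      // The master's address is not a running RegionServer, so the endpoint rejects this with
      // ConstraintException "Server ... is either offline or it does not exist", as logged above and below.
      groups.moveServers(Collections.singleton(masterAddress), "master");
    } catch (IOException expected) {
      // The test base logs this as "Got this on setup, FYI" and continues.
    }
  }
}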
2023-07-24 06:11:20,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:20,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:20,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:20,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:20,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:20,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:20,804 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:20,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:20,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:20,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:20,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:20,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:20,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:20,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:57022 deadline: 1690180280814, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:20,815 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:20,818 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:20,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,819 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:20,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:20,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:20,838 INFO [Listener at localhost/36479] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=568 (was 558) - Thread LEAK? -, OpenFileDescriptor=835 (was 827) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=421 (was 421), ProcessCount=175 (was 175), AvailableMemoryMB=7842 (was 7848) 2023-07-24 06:11:20,838 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-24 06:11:20,855 INFO [Listener at localhost/36479] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=568, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=421, ProcessCount=175, AvailableMemoryMB=7841 2023-07-24 06:11:20,855 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-24 06:11:20,856 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-24 06:11:20,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:20,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 06:11:20,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:20,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:20,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:20,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:20,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:20,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:20,869 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:20,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:20,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,872 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:20,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:20,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:20,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:20,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:20,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57022 deadline: 1690180280881, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:20,881 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 06:11:20,883 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:20,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,884 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:20,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:20,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:20,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 06:11:20,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:20,886 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-24 06:11:20,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 06:11:20,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 06:11:20,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:20,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:11:20,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:20,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:20,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:20,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:20,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:20,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:20,903 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:20,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:20,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:20,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:20,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:20,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:20,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:20,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57022 deadline: 1690180280914, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:20,915 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:20,916 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:20,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,917 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:20,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:20,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:20,939 INFO [Listener at localhost/36479] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=570 (was 568) - Thread LEAK? 
-, OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=421 (was 421), ProcessCount=175 (was 175), AvailableMemoryMB=7839 (was 7841) 2023-07-24 06:11:20,939 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-24 06:11:20,961 INFO [Listener at localhost/36479] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=570, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=421, ProcessCount=175, AvailableMemoryMB=7835 2023-07-24 06:11:20,961 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-24 06:11:20,961 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-24 06:11:20,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:20,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 06:11:20,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:20,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:20,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:20,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:20,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:20,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:20,979 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:20,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:20,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:20,983 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:20,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:20,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:20,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:20,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:20,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57022 deadline: 1690180280990, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:20,991 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 06:11:20,993 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:20,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,994 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:20,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:20,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:20,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:20,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:20,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:20,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 06:11:20,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:20,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:20,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:21,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:21,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:21,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:21,015 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:21,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:21,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:21,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:21,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:21,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:21,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:21,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57022 deadline: 1690180281024, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:21,025 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:21,027 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:21,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,028 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:21,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:21,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:21,048 INFO [Listener at localhost/36479] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=571 (was 570) - Thread LEAK? 
-, OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=421 (was 421), ProcessCount=175 (was 175), AvailableMemoryMB=7834 (was 7835) 2023-07-24 06:11:21,048 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-24 06:11:21,068 INFO [Listener at localhost/36479] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=571, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=421, ProcessCount=175, AvailableMemoryMB=7833 2023-07-24 06:11:21,068 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-24 06:11:21,068 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-24 06:11:21,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:21,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 06:11:21,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:21,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:21,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:21,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:21,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:21,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:21,086 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:21,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:21,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,089 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:21,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:21,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:21,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:21,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:21,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57022 deadline: 1690180281097, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:21,098 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 06:11:21,100 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:21,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,101 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:21,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:21,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:21,102 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-24 06:11:21,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-24 06:11:21,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 06:11:21,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:21,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 06:11:21,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:21,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 06:11:21,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 06:11:21,118 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 06:11:21,124 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:21,127 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-24 06:11:21,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 06:11:21,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 06:11:21,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:21,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:57022 deadline: 1690180281220, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-24 06:11:21,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 06:11:21,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-24 06:11:21,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 06:11:21,241 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 06:11:21,242 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-24 06:11:21,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 06:11:21,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-24 06:11:21,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 06:11:21,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 06:11:21,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:21,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 06:11:21,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:21,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-24 06:11:21,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 06:11:21,358 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 06:11:21,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 06:11:21,365 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 06:11:21,367 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 06:11:21,368 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 06:11:21,368 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 06:11:21,369 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 06:11:21,371 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 06:11:21,372 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-24 06:11:21,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 06:11:21,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 06:11:21,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 06:11:21,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:21,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 06:11:21,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:21,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:21,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:57022 deadline: 1690179141473, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-24 06:11:21,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:21,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
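The testNamespaceConstraint sequence above exercises two constraints visible in the log: a group cannot be removed while a namespace still references it via the hbase.rsgroup.name property ("RSGroup Group_foo is referenced by namespace: Group_foo"), and a namespace cannot be created against a group that does not exist ("Region server group foo does not exist", thrown from RSGroupAdminEndpoint.preCreateNamespace). A short sketch of both cases, assuming the branch-2.x Admin and RSGroupAdminClient APIs seen in the stack traces; the namespace name bogus_ns is hypothetical.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class NamespaceConstraintSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Create a group and bind a namespace to it via "hbase.rsgroup.name".
      rsGroupAdmin.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

      // Constraint 1: removal is rejected while the namespace still references the group.
      try {
        rsGroupAdmin.removeRSGroup("Group_foo");
      } catch (ConstraintException expected) {
        System.out.println("expected: " + expected.getMessage());
      }

      // Dropping the namespace first makes the group removal succeed.
      admin.deleteNamespace("Group_foo");
      rsGroupAdmin.removeRSGroup("Group_foo");

      // Constraint 2: referencing a non-existent group at namespace creation is rejected.
      try {
        admin.createNamespace(NamespaceDescriptor.create("bogus_ns") // hypothetical name
            .addConfiguration("hbase.rsgroup.name", "foo").build());
      } catch (ConstraintException expected) {
        System.out.println("expected: " + expected.getMessage());
      }
    }
  }
}
```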
2023-07-24 06:11:21,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:21,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:21,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:21,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-24 06:11:21,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:21,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 06:11:21,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:21,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 06:11:21,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
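The RSGroupInfoManagerImpl entries throughout this log ("Updating znode: /hbase/rsgroup/...", "Writing ZK GroupInfo count: N") show that every group mutation is also persisted as a child znode under /hbase/rsgroup. A small sketch, assuming a plain ZooKeeper client and the quorum address reported by this mini-cluster (127.0.0.1:57158, ephemeral to this run), of how those persisted entries could be listed; the payload is assumed to be a serialized RSGroupInfo record and is not decoded here.

```java
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class RSGroupZnodeDump {
  public static void main(String[] args) throws Exception {
    // Quorum address taken from this log; use your own cluster's ZK quorum in practice.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:57158", 30000, (WatchedEvent e) -> { });
    try {
      // One child per region server group, e.g. "default", "master", "Group_foo".
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      for (String group : groups) {
        byte[] data = zk.getData("/hbase/rsgroup/" + group, false, null);
        System.out.println(group + " -> "
            + (data == null ? 0 : data.length) + " bytes (assumed serialized RSGroupInfo)");
      }
    } finally {
      zk.close();
    }
  }
}
```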
2023-07-24 06:11:21,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 06:11:21,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 06:11:21,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 06:11:21,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 06:11:21,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 06:11:21,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 06:11:21,494 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 06:11:21,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 06:11:21,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 06:11:21,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 06:11:21,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 06:11:21,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 06:11:21,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43839] to rsgroup master 2023-07-24 06:11:21,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 06:11:21,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57022 deadline: 1690180281506, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 2023-07-24 06:11:21,506 WARN [Listener at localhost/36479] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43839 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 06:11:21,508 INFO [Listener at localhost/36479] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 06:11:21,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 06:11:21,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 06:11:21,509 INFO [Listener at localhost/36479] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33281, jenkins-hbase4.apache.org:35855, jenkins-hbase4.apache.org:37149, jenkins-hbase4.apache.org:42697], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 06:11:21,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 06:11:21,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43839] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 06:11:21,527 INFO [Listener at localhost/36479] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=571 (was 571), OpenFileDescriptor=835 (was 835), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=421 (was 421), ProcessCount=175 (was 175), AvailableMemoryMB=7832 (was 7833) 2023-07-24 06:11:21,527 WARN [Listener at localhost/36479] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-24 06:11:21,527 INFO [Listener at localhost/36479] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 06:11:21,527 INFO [Listener at localhost/36479] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 06:11:21,527 DEBUG [Listener at localhost/36479] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c471bef to 127.0.0.1:57158 2023-07-24 06:11:21,527 DEBUG [Listener at localhost/36479] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,527 DEBUG [Listener at localhost/36479] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 
06:11:21,528 DEBUG [Listener at localhost/36479] util.JVMClusterUtil(257): Found active master hash=291204469, stopped=false 2023-07-24 06:11:21,528 DEBUG [Listener at localhost/36479] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 06:11:21,528 DEBUG [Listener at localhost/36479] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 06:11:21,528 INFO [Listener at localhost/36479] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:21,530 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:21,530 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:21,530 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:21,530 INFO [Listener at localhost/36479] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 06:11:21,530 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:21,530 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 06:11:21,530 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 06:11:21,530 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:21,530 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:21,530 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:21,531 DEBUG [Listener at localhost/36479] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x392f15ad to 127.0.0.1:57158 2023-07-24 06:11:21,530 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:21,531 DEBUG [Listener at localhost/36479] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,531 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42697-0x10195f487a9000b, 
quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 06:11:21,531 INFO [Listener at localhost/36479] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35855,1690179077491' ***** 2023-07-24 06:11:21,531 INFO [Listener at localhost/36479] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:21,531 INFO [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:21,531 INFO [Listener at localhost/36479] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37149,1690179077652' ***** 2023-07-24 06:11:21,533 INFO [Listener at localhost/36479] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:21,533 INFO [Listener at localhost/36479] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33281,1690179077802' ***** 2023-07-24 06:11:21,533 INFO [Listener at localhost/36479] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:21,533 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:21,534 INFO [Listener at localhost/36479] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42697,1690179079221' ***** 2023-07-24 06:11:21,535 INFO [Listener at localhost/36479] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 06:11:21,534 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:21,535 INFO [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:21,536 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:21,538 INFO [RS:0;jenkins-hbase4:35855] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@62cd2c99{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:21,538 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:21,539 INFO [RS:1;jenkins-hbase4:37149] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e992920{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:21,539 INFO [RS:3;jenkins-hbase4:42697] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@294f019b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:21,539 INFO [RS:2;jenkins-hbase4:33281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6c97b680{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 06:11:21,539 INFO [RS:0;jenkins-hbase4:35855] server.AbstractConnector(383): Stopped ServerConnector@54f6b69{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:21,539 INFO [RS:0;jenkins-hbase4:35855] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:21,539 INFO 
[RS:1;jenkins-hbase4:37149] server.AbstractConnector(383): Stopped ServerConnector@7e6ae85e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:21,540 INFO [RS:2;jenkins-hbase4:33281] server.AbstractConnector(383): Stopped ServerConnector@19acff39{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:21,540 INFO [RS:3;jenkins-hbase4:42697] server.AbstractConnector(383): Stopped ServerConnector@345dd9b3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:21,540 INFO [RS:1;jenkins-hbase4:37149] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:21,540 INFO [RS:3;jenkins-hbase4:42697] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:21,540 INFO [RS:0;jenkins-hbase4:35855] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@45e52e99{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:21,540 INFO [RS:2;jenkins-hbase4:33281] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:21,543 INFO [RS:3;jenkins-hbase4:42697] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@55e0b7e1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:21,541 INFO [RS:1;jenkins-hbase4:37149] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1dc4335c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:21,543 INFO [RS:2;jenkins-hbase4:33281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@72fbd169{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:21,545 INFO [RS:1;jenkins-hbase4:37149] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@533b3132{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:21,543 INFO [RS:0;jenkins-hbase4:35855] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@304cae86{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:21,546 INFO [RS:2;jenkins-hbase4:33281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@24c7b503{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:21,545 INFO [RS:3;jenkins-hbase4:42697] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2288e488{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:21,546 INFO [RS:2;jenkins-hbase4:33281] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:21,547 INFO [RS:3;jenkins-hbase4:42697] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:21,547 INFO 
[RS:1;jenkins-hbase4:37149] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:21,547 INFO [RS:2;jenkins-hbase4:33281] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:21,547 INFO [RS:2;jenkins-hbase4:33281] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:21,547 INFO [RS:3;jenkins-hbase4:42697] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:21,547 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(3305): Received CLOSE for e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:21,547 INFO [RS:1;jenkins-hbase4:37149] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:21,547 INFO [RS:0;jenkins-hbase4:35855] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 06:11:21,547 INFO [RS:3;jenkins-hbase4:42697] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:21,547 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:21,547 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:21,547 INFO [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:21,547 INFO [RS:1;jenkins-hbase4:37149] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:21,547 DEBUG [RS:3;jenkins-hbase4:42697] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2bebdbb8 to 127.0.0.1:57158 2023-07-24 06:11:21,547 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(3305): Received CLOSE for 8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:21,547 DEBUG [RS:3;jenkins-hbase4:42697] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,547 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:21,547 INFO [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42697,1690179079221; all regions closed. 2023-07-24 06:11:21,548 DEBUG [RS:1;jenkins-hbase4:37149] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0be04482 to 127.0.0.1:57158 2023-07-24 06:11:21,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8c9e7b795719c4dfa78dc36415600282, disabling compactions & flushes 2023-07-24 06:11:21,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:21,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:21,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. after waiting 0 ms 2023-07-24 06:11:21,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 
2023-07-24 06:11:21,548 DEBUG [RS:1;jenkins-hbase4:37149] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8c9e7b795719c4dfa78dc36415600282 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-24 06:11:21,548 INFO [RS:1;jenkins-hbase4:37149] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:21,548 INFO [RS:1;jenkins-hbase4:37149] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:21,548 INFO [RS:1;jenkins-hbase4:37149] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 06:11:21,548 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 06:11:21,548 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:21,548 DEBUG [RS:2;jenkins-hbase4:33281] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x33695ab7 to 127.0.0.1:57158 2023-07-24 06:11:21,548 DEBUG [RS:2;jenkins-hbase4:33281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,548 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 06:11:21,548 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1478): Online Regions={e1cf5974bfac51e5ef8438c944013be6=hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6.} 2023-07-24 06:11:21,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 06:11:21,549 DEBUG [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1504): Waiting on e1cf5974bfac51e5ef8438c944013be6 2023-07-24 06:11:21,548 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 06:11:21,549 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 06:11:21,549 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 8c9e7b795719c4dfa78dc36415600282=hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282.} 2023-07-24 06:11:21,549 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 06:11:21,549 DEBUG [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1504): Waiting on 1588230740, 8c9e7b795719c4dfa78dc36415600282 2023-07-24 06:11:21,549 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 06:11:21,549 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 06:11:21,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e1cf5974bfac51e5ef8438c944013be6, disabling compactions & flushes 2023-07-24 06:11:21,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 
2023-07-24 06:11:21,549 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-24 06:11:21,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:21,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. after waiting 0 ms 2023-07-24 06:11:21,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:21,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e1cf5974bfac51e5ef8438c944013be6 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-24 06:11:21,550 INFO [RS:0;jenkins-hbase4:35855] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 06:11:21,550 INFO [RS:0;jenkins-hbase4:35855] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 06:11:21,550 INFO [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:21,550 DEBUG [RS:0;jenkins-hbase4:35855] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x47510a59 to 127.0.0.1:57158 2023-07-24 06:11:21,550 DEBUG [RS:0;jenkins-hbase4:35855] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,550 INFO [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35855,1690179077491; all regions closed. 2023-07-24 06:11:21,550 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 06:11:21,560 DEBUG [RS:3;jenkins-hbase4:42697] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs 2023-07-24 06:11:21,560 INFO [RS:3;jenkins-hbase4:42697] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42697%2C1690179079221:(num 1690179079569) 2023-07-24 06:11:21,560 DEBUG [RS:3;jenkins-hbase4:42697] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,560 INFO [RS:3;jenkins-hbase4:42697] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:21,565 INFO [RS:3;jenkins-hbase4:42697] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:21,565 INFO [RS:3;jenkins-hbase4:42697] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:21,565 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:21,565 INFO [RS:3;jenkins-hbase4:42697] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:21,565 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:21,565 INFO [RS:3;jenkins-hbase4:42697] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 06:11:21,566 INFO [RS:3;jenkins-hbase4:42697] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42697 2023-07-24 06:11:21,568 DEBUG [RS:0;jenkins-hbase4:35855] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs 2023-07-24 06:11:21,568 INFO [RS:0;jenkins-hbase4:35855] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35855%2C1690179077491:(num 1690179078395) 2023-07-24 06:11:21,568 DEBUG [RS:0;jenkins-hbase4:35855] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,568 INFO [RS:0;jenkins-hbase4:35855] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42697,1690179079221 2023-07-24 06:11:21,569 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:21,569 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42697,1690179079221] 2023-07-24 06:11:21,569 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42697,1690179079221; 
numProcessing=1 2023-07-24 06:11:21,571 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42697,1690179079221 already deleted, retry=false 2023-07-24 06:11:21,571 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42697,1690179079221 expired; onlineServers=3 2023-07-24 06:11:21,571 INFO [RS:0;jenkins-hbase4:35855] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:21,571 INFO [RS:0;jenkins-hbase4:35855] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:21,571 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:21,571 INFO [RS:0;jenkins-hbase4:35855] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:21,571 INFO [RS:0;jenkins-hbase4:35855] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 06:11:21,578 INFO [RS:0;jenkins-hbase4:35855] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35855 2023-07-24 06:11:21,580 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:21,580 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:21,580 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:21,580 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35855,1690179077491 2023-07-24 06:11:21,580 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:21,581 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35855,1690179077491] 2023-07-24 06:11:21,581 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35855,1690179077491; numProcessing=2 2023-07-24 06:11:21,581 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:21,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282/.tmp/info/affe1fa2334646a69c4304b6ac916465 2023-07-24 06:11:21,588 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase4.apache.org,35855,1690179077491 already deleted, retry=false 2023-07-24 06:11:21,588 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35855,1690179077491 expired; onlineServers=2 2023-07-24 06:11:21,590 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/.tmp/info/140aabffd37b4a41b4907e3f3aa0f5ff 2023-07-24 06:11:21,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6/.tmp/m/905550c1caf9447db80c2376c16d0c6d 2023-07-24 06:11:21,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for affe1fa2334646a69c4304b6ac916465 2023-07-24 06:11:21,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282/.tmp/info/affe1fa2334646a69c4304b6ac916465 as hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282/info/affe1fa2334646a69c4304b6ac916465 2023-07-24 06:11:21,597 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 140aabffd37b4a41b4907e3f3aa0f5ff 2023-07-24 06:11:21,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 905550c1caf9447db80c2376c16d0c6d 2023-07-24 06:11:21,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6/.tmp/m/905550c1caf9447db80c2376c16d0c6d as hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6/m/905550c1caf9447db80c2376c16d0c6d 2023-07-24 06:11:21,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for affe1fa2334646a69c4304b6ac916465 2023-07-24 06:11:21,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282/info/affe1fa2334646a69c4304b6ac916465, entries=3, sequenceid=9, filesize=5.0 K 2023-07-24 06:11:21,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 8c9e7b795719c4dfa78dc36415600282 in 63ms, sequenceid=9, compaction requested=false 2023-07-24 06:11:21,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
905550c1caf9447db80c2376c16d0c6d 2023-07-24 06:11:21,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6/m/905550c1caf9447db80c2376c16d0c6d, entries=12, sequenceid=29, filesize=5.4 K 2023-07-24 06:11:21,619 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/.tmp/rep_barrier/c62f6d5855c14cdb975ba386cd30936f 2023-07-24 06:11:21,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for e1cf5974bfac51e5ef8438c944013be6 in 70ms, sequenceid=29, compaction requested=false 2023-07-24 06:11:21,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/namespace/8c9e7b795719c4dfa78dc36415600282/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-24 06:11:21,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:21,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8c9e7b795719c4dfa78dc36415600282: 2023-07-24 06:11:21,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690179078758.8c9e7b795719c4dfa78dc36415600282. 2023-07-24 06:11:21,626 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c62f6d5855c14cdb975ba386cd30936f 2023-07-24 06:11:21,630 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/rsgroup/e1cf5974bfac51e5ef8438c944013be6/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-24 06:11:21,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:11:21,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 2023-07-24 06:11:21,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e1cf5974bfac51e5ef8438c944013be6: 2023-07-24 06:11:21,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690179078743.e1cf5974bfac51e5ef8438c944013be6. 
2023-07-24 06:11:21,642 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/.tmp/table/6a45db7e59a54dfc8ff54286a053c058 2023-07-24 06:11:21,648 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6a45db7e59a54dfc8ff54286a053c058 2023-07-24 06:11:21,649 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/.tmp/info/140aabffd37b4a41b4907e3f3aa0f5ff as hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/info/140aabffd37b4a41b4907e3f3aa0f5ff 2023-07-24 06:11:21,654 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 140aabffd37b4a41b4907e3f3aa0f5ff 2023-07-24 06:11:21,655 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/info/140aabffd37b4a41b4907e3f3aa0f5ff, entries=22, sequenceid=26, filesize=7.3 K 2023-07-24 06:11:21,655 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/.tmp/rep_barrier/c62f6d5855c14cdb975ba386cd30936f as hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/rep_barrier/c62f6d5855c14cdb975ba386cd30936f 2023-07-24 06:11:21,662 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c62f6d5855c14cdb975ba386cd30936f 2023-07-24 06:11:21,662 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/rep_barrier/c62f6d5855c14cdb975ba386cd30936f, entries=1, sequenceid=26, filesize=4.9 K 2023-07-24 06:11:21,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/.tmp/table/6a45db7e59a54dfc8ff54286a053c058 as hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/table/6a45db7e59a54dfc8ff54286a053c058 2023-07-24 06:11:21,669 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6a45db7e59a54dfc8ff54286a053c058 2023-07-24 06:11:21,669 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/table/6a45db7e59a54dfc8ff54286a053c058, entries=6, sequenceid=26, filesize=5.1 K 2023-07-24 06:11:21,670 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 
KB/8976, currentSize=0 B/0 for 1588230740 in 121ms, sequenceid=26, compaction requested=false 2023-07-24 06:11:21,682 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-24 06:11:21,683 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 06:11:21,684 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 06:11:21,684 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 06:11:21,684 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 06:11:21,729 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:21,729 INFO [RS:0;jenkins-hbase4:35855] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35855,1690179077491; zookeeper connection closed. 2023-07-24 06:11:21,729 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:35855-0x10195f487a90001, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 06:11:21,730 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@c289e11] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@c289e11 2023-07-24 06:11:21,749 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33281,1690179077802; all regions closed. 2023-07-24 06:11:21,749 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37149,1690179077652; all regions closed. 
2023-07-24 06:11:21,757 DEBUG [RS:1;jenkins-hbase4:37149] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs 2023-07-24 06:11:21,757 DEBUG [RS:2;jenkins-hbase4:33281] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs 2023-07-24 06:11:21,757 INFO [RS:1;jenkins-hbase4:37149] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37149%2C1690179077652.meta:.meta(num 1690179078637) 2023-07-24 06:11:21,757 INFO [RS:2;jenkins-hbase4:33281] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33281%2C1690179077802:(num 1690179078419) 2023-07-24 06:11:21,757 DEBUG [RS:2;jenkins-hbase4:33281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,757 INFO [RS:2;jenkins-hbase4:33281] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:21,758 INFO [RS:2;jenkins-hbase4:33281] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:21,758 INFO [RS:2;jenkins-hbase4:33281] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 06:11:21,758 INFO [RS:2;jenkins-hbase4:33281] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 06:11:21,758 INFO [RS:2;jenkins-hbase4:33281] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 06:11:21,758 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:21,760 INFO [RS:2;jenkins-hbase4:33281] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33281 2023-07-24 06:11:21,762 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:21,762 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33281,1690179077802 2023-07-24 06:11:21,762 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:21,763 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33281,1690179077802] 2023-07-24 06:11:21,764 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33281,1690179077802; numProcessing=3 2023-07-24 06:11:21,766 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33281,1690179077802 already deleted, retry=false 2023-07-24 06:11:21,766 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33281,1690179077802 expired; onlineServers=1 2023-07-24 06:11:21,766 DEBUG [RS:1;jenkins-hbase4:37149] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/oldWALs 2023-07-24 06:11:21,766 INFO [RS:1;jenkins-hbase4:37149] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37149%2C1690179077652:(num 1690179078419) 2023-07-24 06:11:21,766 DEBUG [RS:1;jenkins-hbase4:37149] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,766 INFO [RS:1;jenkins-hbase4:37149] regionserver.LeaseManager(133): Closed leases 2023-07-24 06:11:21,767 INFO [RS:1;jenkins-hbase4:37149] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 06:11:21,767 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 06:11:21,768 INFO [RS:1;jenkins-hbase4:37149] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37149 2023-07-24 06:11:21,770 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 06:11:21,770 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37149,1690179077652 2023-07-24 06:11:21,771 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37149,1690179077652] 2023-07-24 06:11:21,771 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37149,1690179077652; numProcessing=4 2023-07-24 06:11:21,772 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37149,1690179077652 already deleted, retry=false 2023-07-24 06:11:21,772 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37149,1690179077652 expired; onlineServers=0 2023-07-24 06:11:21,772 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43839,1690179077310' ***** 2023-07-24 06:11:21,772 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 06:11:21,772 DEBUG [M:0;jenkins-hbase4:43839] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1564fb86, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 06:11:21,772 INFO [M:0;jenkins-hbase4:43839] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 06:11:21,775 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 06:11:21,775 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-24 06:11:21,775 INFO [M:0;jenkins-hbase4:43839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7332fc65{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 06:11:21,775 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 06:11:21,776 INFO [M:0;jenkins-hbase4:43839] server.AbstractConnector(383): Stopped ServerConnector@b94c84d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:21,776 INFO [M:0;jenkins-hbase4:43839] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 06:11:21,776 INFO [M:0;jenkins-hbase4:43839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64f11582{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 06:11:21,777 INFO [M:0;jenkins-hbase4:43839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41b87c61{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/hadoop.log.dir/,STOPPED} 2023-07-24 06:11:21,777 INFO [M:0;jenkins-hbase4:43839] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43839,1690179077310 2023-07-24 06:11:21,777 INFO [M:0;jenkins-hbase4:43839] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43839,1690179077310; all regions closed. 2023-07-24 06:11:21,778 DEBUG [M:0;jenkins-hbase4:43839] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 06:11:21,778 INFO [M:0;jenkins-hbase4:43839] master.HMaster(1491): Stopping master jetty server 2023-07-24 06:11:21,778 INFO [M:0;jenkins-hbase4:43839] server.AbstractConnector(383): Stopped ServerConnector@6a0b3282{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 06:11:21,779 DEBUG [M:0;jenkins-hbase4:43839] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 06:11:21,779 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 06:11:21,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179078134] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690179078134,5,FailOnTimeoutGroup] 2023-07-24 06:11:21,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179078134] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690179078134,5,FailOnTimeoutGroup] 2023-07-24 06:11:21,779 DEBUG [M:0;jenkins-hbase4:43839] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 06:11:21,779 INFO [M:0;jenkins-hbase4:43839] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 06:11:21,779 INFO [M:0;jenkins-hbase4:43839] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-24 06:11:21,780 INFO [M:0;jenkins-hbase4:43839] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 06:11:21,780 DEBUG [M:0;jenkins-hbase4:43839] master.HMaster(1512): Stopping service threads 2023-07-24 06:11:21,780 INFO [M:0;jenkins-hbase4:43839] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 06:11:21,780 ERROR [M:0;jenkins-hbase4:43839] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 06:11:21,780 INFO [M:0;jenkins-hbase4:43839] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 06:11:21,781 DEBUG [M:0;jenkins-hbase4:43839] zookeeper.ZKUtil(398): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 06:11:21,781 WARN [M:0;jenkins-hbase4:43839] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 06:11:21,781 INFO [M:0;jenkins-hbase4:43839] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 06:11:21,781 INFO [M:0;jenkins-hbase4:43839] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 06:11:21,782 DEBUG [M:0;jenkins-hbase4:43839] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 06:11:21,782 INFO [M:0;jenkins-hbase4:43839] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:21,782 DEBUG [M:0;jenkins-hbase4:43839] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:21,782 DEBUG [M:0;jenkins-hbase4:43839] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 06:11:21,782 DEBUG [M:0;jenkins-hbase4:43839] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 06:11:21,782 INFO [M:0;jenkins-hbase4:43839] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.19 KB heapSize=90.64 KB 2023-07-24 06:11:21,786 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-24 06:11:21,798 INFO [M:0;jenkins-hbase4:43839] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.19 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/20a281f8f5ba42909431f79762b57780
2023-07-24 06:11:21,803 DEBUG [M:0;jenkins-hbase4:43839] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/20a281f8f5ba42909431f79762b57780 as hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/20a281f8f5ba42909431f79762b57780
2023-07-24 06:11:21,808 INFO [M:0;jenkins-hbase4:43839] regionserver.HStore(1080): Added hdfs://localhost:33169/user/jenkins/test-data/cd4c0039-9829-729f-85aa-d412e2179b8e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/20a281f8f5ba42909431f79762b57780, entries=22, sequenceid=175, filesize=11.1 K
2023-07-24 06:11:21,809 INFO [M:0;jenkins-hbase4:43839] regionserver.HRegion(2948): Finished flush of dataSize ~76.19 KB/78016, heapSize ~90.63 KB/92800, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=175, compaction requested=false
2023-07-24 06:11:21,815 INFO [M:0;jenkins-hbase4:43839] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 06:11:21,815 DEBUG [M:0;jenkins-hbase4:43839] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-24 06:11:21,819 INFO [M:0;jenkins-hbase4:43839] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-24 06:11:21,820 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-24 06:11:21,820 INFO [M:0;jenkins-hbase4:43839] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43839
2023-07-24 06:11:21,822 DEBUG [M:0;jenkins-hbase4:43839] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43839,1690179077310 already deleted, retry=false
2023-07-24 06:11:21,829 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 06:11:21,829 INFO [RS:3;jenkins-hbase4:42697] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42697,1690179079221; zookeeper connection closed.
2023-07-24 06:11:21,829 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:42697-0x10195f487a9000b, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 06:11:21,830 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@572c131c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@572c131c
2023-07-24 06:11:22,431 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 06:11:22,431 INFO [M:0;jenkins-hbase4:43839] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43839,1690179077310; zookeeper connection closed.
2023-07-24 06:11:22,431 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): master:43839-0x10195f487a90000, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 06:11:22,531 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 06:11:22,531 INFO [RS:1;jenkins-hbase4:37149] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37149,1690179077652; zookeeper connection closed.
2023-07-24 06:11:22,531 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:37149-0x10195f487a90002, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 06:11:22,532 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@34e8c105] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@34e8c105
2023-07-24 06:11:22,632 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 06:11:22,632 INFO [RS:2;jenkins-hbase4:33281] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33281,1690179077802; zookeeper connection closed.
2023-07-24 06:11:22,632 DEBUG [Listener at localhost/36479-EventThread] zookeeper.ZKWatcher(600): regionserver:33281-0x10195f487a90003, quorum=127.0.0.1:57158, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 06:11:22,632 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1e32e45b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1e32e45b
2023-07-24 06:11:22,632 INFO [Listener at localhost/36479] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-24 06:11:22,633 WARN [Listener at localhost/36479] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 06:11:22,641 INFO [Listener at localhost/36479] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 06:11:22,744 WARN [BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 06:11:22,744 WARN [BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1053864684-172.31.14.131-1690179076570 (Datanode Uuid 62793cb3-c6e8-4930-a854-e48c5487fc04) service to localhost/127.0.0.1:33169
2023-07-24 06:11:22,745 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data5/current/BP-1053864684-172.31.14.131-1690179076570] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 06:11:22,745 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data6/current/BP-1053864684-172.31.14.131-1690179076570] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 06:11:22,748 WARN [Listener at localhost/36479] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 06:11:22,752 INFO [Listener at localhost/36479] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 06:11:22,855 WARN [BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 06:11:22,855 WARN [BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1053864684-172.31.14.131-1690179076570 (Datanode Uuid 9775e98f-62f4-4e02-b1eb-2dc6187768d1) service to localhost/127.0.0.1:33169
2023-07-24 06:11:22,856 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data3/current/BP-1053864684-172.31.14.131-1690179076570] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 06:11:22,856 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data4/current/BP-1053864684-172.31.14.131-1690179076570] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 06:11:22,857 WARN [Listener at localhost/36479] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 06:11:22,860 INFO [Listener at localhost/36479] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 06:11:22,963 WARN [BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 06:11:22,963 WARN [BP-1053864684-172.31.14.131-1690179076570 heartbeating to localhost/127.0.0.1:33169] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1053864684-172.31.14.131-1690179076570 (Datanode Uuid 2f764714-bd09-4536-9863-ef4b0bd9b729) service to localhost/127.0.0.1:33169
2023-07-24 06:11:22,964 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data1/current/BP-1053864684-172.31.14.131-1690179076570] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 06:11:22,965 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23722801-de79-0912-53ed-53ec49f6b3cb/cluster_5fd7838a-c002-8081-61ee-0520d5b8fc61/dfs/data/data2/current/BP-1053864684-172.31.14.131-1690179076570] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 06:11:22,974 INFO [Listener at localhost/36479] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 06:11:23,089 INFO [Listener at localhost/36479] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-24 06:11:23,116 INFO [Listener at localhost/36479] hbase.HBaseTestingUtility(1293): Minicluster is down