2023-07-24 21:10:30,117 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f 2023-07-24 21:10:30,141 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-24 21:10:30,166 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 21:10:30,167 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1, deleteOnExit=true 2023-07-24 21:10:30,167 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 21:10:30,168 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/test.cache.data in system properties and HBase conf 2023-07-24 21:10:30,168 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 21:10:30,170 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir in system properties and HBase conf 2023-07-24 21:10:30,171 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 21:10:30,172 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 21:10:30,172 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 21:10:30,339 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-24 21:10:30,809 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-24 21:10:30,815 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 21:10:30,816 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 21:10:30,816 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 21:10:30,817 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 21:10:30,817 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 21:10:30,817 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 21:10:30,818 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 21:10:30,818 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 21:10:30,819 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 21:10:30,819 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/nfs.dump.dir in system properties and HBase conf 2023-07-24 21:10:30,819 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/java.io.tmpdir in system properties and HBase conf 2023-07-24 21:10:30,820 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 21:10:30,820 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 21:10:30,821 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 21:10:31,366 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 21:10:31,370 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 21:10:31,703 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-24 21:10:31,887 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-24 21:10:31,905 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:10:31,951 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:10:31,988 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/java.io.tmpdir/Jetty_localhost_46113_hdfs____5qzpbv/webapp 2023-07-24 21:10:32,151 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46113 2023-07-24 21:10:32,197 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 21:10:32,197 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 21:10:32,750 WARN [Listener at localhost/44343] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:10:32,835 WARN [Listener at localhost/44343] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:10:32,855 WARN [Listener at localhost/44343] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:10:32,863 INFO [Listener at localhost/44343] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:10:32,869 INFO [Listener at localhost/44343] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/java.io.tmpdir/Jetty_localhost_40753_datanode____.arbzwg/webapp 2023-07-24 21:10:32,982 INFO [Listener at localhost/44343] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40753 2023-07-24 21:10:33,433 WARN [Listener at localhost/39293] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:10:33,489 WARN [Listener at localhost/39293] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:10:33,498 WARN [Listener at localhost/39293] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:10:33,501 INFO [Listener at localhost/39293] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:10:33,511 INFO [Listener at localhost/39293] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/java.io.tmpdir/Jetty_localhost_35763_datanode____.coqjd3/webapp 2023-07-24 21:10:33,632 INFO [Listener at localhost/39293] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35763 2023-07-24 21:10:33,655 WARN [Listener at localhost/45363] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:10:33,716 WARN [Listener at localhost/45363] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:10:33,726 WARN [Listener at localhost/45363] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:10:33,728 INFO [Listener at localhost/45363] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:10:33,740 INFO [Listener at localhost/45363] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/java.io.tmpdir/Jetty_localhost_34437_datanode____.2c3476/webapp 2023-07-24 21:10:33,876 INFO [Listener at localhost/45363] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34437 2023-07-24 21:10:33,887 WARN [Listener at localhost/42247] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:10:34,116 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb99dbccd77b35d63: Processing first storage report for DS-81618ec5-5d71-4828-a561-d1a2477475b2 from datanode 0f426e30-868f-4b8e-bbc4-a8d11e6da9c3 2023-07-24 21:10:34,118 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb99dbccd77b35d63: from storage DS-81618ec5-5d71-4828-a561-d1a2477475b2 node DatanodeRegistration(127.0.0.1:38973, datanodeUuid=0f426e30-868f-4b8e-bbc4-a8d11e6da9c3, infoPort=35663, 
infoSecurePort=0, ipcPort=45363, storageInfo=lv=-57;cid=testClusterID;nsid=1545964436;c=1690233031453), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-24 21:10:34,119 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3d0614641600b46a: Processing first storage report for DS-84c42161-298e-4ed1-a9d8-1be7f73287ea from datanode 59e611f2-095b-4fe6-b4e9-83977eff25c7 2023-07-24 21:10:34,119 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3d0614641600b46a: from storage DS-84c42161-298e-4ed1-a9d8-1be7f73287ea node DatanodeRegistration(127.0.0.1:33907, datanodeUuid=59e611f2-095b-4fe6-b4e9-83977eff25c7, infoPort=35089, infoSecurePort=0, ipcPort=42247, storageInfo=lv=-57;cid=testClusterID;nsid=1545964436;c=1690233031453), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:10:34,119 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb43acfb41ba2a5bf: Processing first storage report for DS-610343df-8cd3-412f-9a03-7632737f42f0 from datanode 3fedd356-1388-40ee-8b14-2235f4bcff53 2023-07-24 21:10:34,119 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb43acfb41ba2a5bf: from storage DS-610343df-8cd3-412f-9a03-7632737f42f0 node DatanodeRegistration(127.0.0.1:46493, datanodeUuid=3fedd356-1388-40ee-8b14-2235f4bcff53, infoPort=34535, infoSecurePort=0, ipcPort=39293, storageInfo=lv=-57;cid=testClusterID;nsid=1545964436;c=1690233031453), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:10:34,119 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb99dbccd77b35d63: Processing first storage report for DS-c736ce60-1470-414d-9d2a-e9bf42d4d9fc from datanode 0f426e30-868f-4b8e-bbc4-a8d11e6da9c3 2023-07-24 21:10:34,120 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb99dbccd77b35d63: from storage DS-c736ce60-1470-414d-9d2a-e9bf42d4d9fc node DatanodeRegistration(127.0.0.1:38973, datanodeUuid=0f426e30-868f-4b8e-bbc4-a8d11e6da9c3, infoPort=35663, infoSecurePort=0, ipcPort=45363, storageInfo=lv=-57;cid=testClusterID;nsid=1545964436;c=1690233031453), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:10:34,120 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3d0614641600b46a: Processing first storage report for DS-bd1c8477-0883-4bb0-bb07-144d791a898d from datanode 59e611f2-095b-4fe6-b4e9-83977eff25c7 2023-07-24 21:10:34,120 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3d0614641600b46a: from storage DS-bd1c8477-0883-4bb0-bb07-144d791a898d node DatanodeRegistration(127.0.0.1:33907, datanodeUuid=59e611f2-095b-4fe6-b4e9-83977eff25c7, infoPort=35089, infoSecurePort=0, ipcPort=42247, storageInfo=lv=-57;cid=testClusterID;nsid=1545964436;c=1690233031453), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:10:34,120 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb43acfb41ba2a5bf: Processing first storage report for DS-727e0f05-ab69-44b8-a421-3196e78e1c72 from datanode 3fedd356-1388-40ee-8b14-2235f4bcff53 2023-07-24 21:10:34,120 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb43acfb41ba2a5bf: from storage 
DS-727e0f05-ab69-44b8-a421-3196e78e1c72 node DatanodeRegistration(127.0.0.1:46493, datanodeUuid=3fedd356-1388-40ee-8b14-2235f4bcff53, infoPort=34535, infoSecurePort=0, ipcPort=39293, storageInfo=lv=-57;cid=testClusterID;nsid=1545964436;c=1690233031453), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:10:34,416 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f 2023-07-24 21:10:34,504 INFO [Listener at localhost/42247] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/zookeeper_0, clientPort=59094, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 21:10:34,520 INFO [Listener at localhost/42247] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59094 2023-07-24 21:10:34,530 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:34,532 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:35,255 INFO [Listener at localhost/42247] util.FSUtils(471): Created version file at hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7 with version=8 2023-07-24 21:10:35,255 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/hbase-staging 2023-07-24 21:10:35,268 DEBUG [Listener at localhost/42247] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 21:10:35,268 DEBUG [Listener at localhost/42247] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 21:10:35,268 DEBUG [Listener at localhost/42247] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 21:10:35,268 DEBUG [Listener at localhost/42247] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
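
For readers following the startup sequence above, here is a minimal sketch, assuming the standard HBase 2.x testing API (HBaseTestingUtility / StartMiniClusterOption), of how a test such as TestRSGroupsAdmin1 would request the cluster shape logged at 21:10:30,166: one master, three region servers, three data nodes and one ZooKeeper server. This is not part of the captured log, and the class and field names below (MiniClusterStartupSketch, TEST_UTIL) are illustrative rather than taken from the actual test source.

// A minimal sketch, assuming the standard HBase 2.x test API; names are illustrative.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  // Shared test utility; it owns the temporary test-data directory seen in the log paths.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    // Mirrors the logged StartMiniClusterOption{numMasters=1, numRegionServers=3,
    // numDataNodes=3, numZkServers=1}.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);   // brings up DFS, ZooKeeper, master and region servers
    try {
      // Test body would run against TEST_UTIL.getAdmin() / TEST_UTIL.getConnection().
    } finally {
      TEST_UTIL.shutdownMiniCluster();    // tears the cluster down and removes the data dir
    }
  }
}

The DFS, ZooKeeper, master and region server bring-up recorded throughout this log is the work performed inside startMiniCluster(option).
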
2023-07-24 21:10:35,664 INFO [Listener at localhost/42247] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-24 21:10:36,284 INFO [Listener at localhost/42247] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:10:36,334 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:36,335 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:36,335 INFO [Listener at localhost/42247] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:10:36,335 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:36,335 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:10:36,516 INFO [Listener at localhost/42247] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:10:36,641 DEBUG [Listener at localhost/42247] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-24 21:10:36,755 INFO [Listener at localhost/42247] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37361 2023-07-24 21:10:36,770 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:36,774 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:36,807 INFO [Listener at localhost/42247] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37361 connecting to ZooKeeper ensemble=127.0.0.1:59094 2023-07-24 21:10:36,871 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:373610x0, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:10:36,874 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37361-0x101992bd9f80000 connected 2023-07-24 21:10:36,903 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:10:36,904 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:10:36,908 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:10:36,919 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37361 2023-07-24 21:10:36,920 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37361 2023-07-24 21:10:36,922 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37361 2023-07-24 21:10:36,924 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37361 2023-07-24 21:10:36,924 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37361 2023-07-24 21:10:36,968 INFO [Listener at localhost/42247] log.Log(170): Logging initialized @7614ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-24 21:10:37,140 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:10:37,141 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:10:37,141 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:10:37,143 INFO [Listener at localhost/42247] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 21:10:37,144 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:10:37,144 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:10:37,148 INFO [Listener at localhost/42247] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 21:10:37,218 INFO [Listener at localhost/42247] http.HttpServer(1146): Jetty bound to port 43473 2023-07-24 21:10:37,220 INFO [Listener at localhost/42247] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:10:37,266 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,271 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1fef8f64{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:10:37,272 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,272 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@352e26e6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:10:37,365 INFO [Listener at localhost/42247] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:10:37,380 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:10:37,381 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:10:37,384 INFO [Listener at localhost/42247] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:10:37,394 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,426 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7d3907af{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 21:10:37,440 INFO [Listener at localhost/42247] server.AbstractConnector(333): Started ServerConnector@3b87408a{HTTP/1.1, (http/1.1)}{0.0.0.0:43473} 2023-07-24 21:10:37,441 INFO [Listener at localhost/42247] server.Server(415): Started @8087ms 2023-07-24 21:10:37,445 INFO [Listener at localhost/42247] master.HMaster(444): hbase.rootdir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7, hbase.cluster.distributed=false 2023-07-24 21:10:37,534 INFO [Listener at localhost/42247] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:10:37,534 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,535 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,535 INFO [Listener at localhost/42247] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 
21:10:37,535 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,535 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:10:37,543 INFO [Listener at localhost/42247] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:10:37,546 INFO [Listener at localhost/42247] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39543 2023-07-24 21:10:37,549 INFO [Listener at localhost/42247] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:10:37,557 DEBUG [Listener at localhost/42247] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:10:37,558 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:37,561 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:37,563 INFO [Listener at localhost/42247] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39543 connecting to ZooKeeper ensemble=127.0.0.1:59094 2023-07-24 21:10:37,572 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:395430x0, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:10:37,574 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39543-0x101992bd9f80001 connected 2023-07-24 21:10:37,574 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:10:37,575 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:10:37,576 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:10:37,577 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39543 2023-07-24 21:10:37,577 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39543 2023-07-24 21:10:37,578 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39543 2023-07-24 21:10:37,582 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39543 2023-07-24 21:10:37,583 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39543 2023-07-24 21:10:37,586 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:10:37,586 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:10:37,586 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:10:37,588 INFO [Listener at localhost/42247] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:10:37,588 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:10:37,588 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:10:37,588 INFO [Listener at localhost/42247] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 21:10:37,590 INFO [Listener at localhost/42247] http.HttpServer(1146): Jetty bound to port 46621 2023-07-24 21:10:37,591 INFO [Listener at localhost/42247] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:10:37,600 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,600 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@58393e7a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:10:37,601 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,601 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@43eef2eb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:10:37,615 INFO [Listener at localhost/42247] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:10:37,616 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:10:37,616 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:10:37,617 INFO [Listener at localhost/42247] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 21:10:37,619 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,623 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2e477f98{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:10:37,625 INFO [Listener at localhost/42247] server.AbstractConnector(333): Started ServerConnector@1f3e1e33{HTTP/1.1, (http/1.1)}{0.0.0.0:46621} 2023-07-24 21:10:37,625 INFO [Listener at localhost/42247] server.Server(415): Started @8271ms 2023-07-24 21:10:37,638 INFO [Listener at localhost/42247] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:10:37,638 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,638 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,639 INFO [Listener at localhost/42247] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:10:37,639 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,639 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:10:37,639 INFO [Listener at localhost/42247] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:10:37,641 INFO [Listener at localhost/42247] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35829 2023-07-24 21:10:37,641 INFO [Listener at localhost/42247] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:10:37,642 DEBUG [Listener at localhost/42247] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:10:37,643 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:37,645 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:37,646 INFO [Listener at localhost/42247] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35829 connecting to ZooKeeper ensemble=127.0.0.1:59094 2023-07-24 21:10:37,649 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:358290x0, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:10:37,650 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35829-0x101992bd9f80002 connected 2023-07-24 21:10:37,650 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): 
regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:10:37,651 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:10:37,652 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:10:37,659 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35829 2023-07-24 21:10:37,659 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35829 2023-07-24 21:10:37,660 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35829 2023-07-24 21:10:37,660 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35829 2023-07-24 21:10:37,661 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35829 2023-07-24 21:10:37,663 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:10:37,664 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:10:37,664 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:10:37,664 INFO [Listener at localhost/42247] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:10:37,664 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:10:37,665 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:10:37,665 INFO [Listener at localhost/42247] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 21:10:37,665 INFO [Listener at localhost/42247] http.HttpServer(1146): Jetty bound to port 40175 2023-07-24 21:10:37,666 INFO [Listener at localhost/42247] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:10:37,667 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,668 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7f564b48{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:10:37,668 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,668 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1f034a4d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:10:37,677 INFO [Listener at localhost/42247] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:10:37,677 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:10:37,678 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:10:37,678 INFO [Listener at localhost/42247] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:10:37,679 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,680 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1ca57311{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:10:37,681 INFO [Listener at localhost/42247] server.AbstractConnector(333): Started ServerConnector@b5441b{HTTP/1.1, (http/1.1)}{0.0.0.0:40175} 2023-07-24 21:10:37,681 INFO [Listener at localhost/42247] server.Server(415): Started @8327ms 2023-07-24 21:10:37,695 INFO [Listener at localhost/42247] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:10:37,695 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,696 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,696 INFO [Listener at localhost/42247] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:10:37,696 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-24 21:10:37,696 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:10:37,696 INFO [Listener at localhost/42247] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:10:37,698 INFO [Listener at localhost/42247] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40083 2023-07-24 21:10:37,698 INFO [Listener at localhost/42247] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:10:37,700 DEBUG [Listener at localhost/42247] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:10:37,701 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:37,703 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:37,705 INFO [Listener at localhost/42247] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40083 connecting to ZooKeeper ensemble=127.0.0.1:59094 2023-07-24 21:10:37,710 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:400830x0, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:10:37,711 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:400830x0, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:10:37,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40083-0x101992bd9f80003 connected 2023-07-24 21:10:37,714 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:10:37,715 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:10:37,719 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40083 2023-07-24 21:10:37,722 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40083 2023-07-24 21:10:37,723 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40083 2023-07-24 21:10:37,723 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40083 2023-07-24 21:10:37,724 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40083 2023-07-24 21:10:37,726 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:10:37,726 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:10:37,727 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:10:37,727 INFO [Listener at localhost/42247] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:10:37,727 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:10:37,727 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:10:37,727 INFO [Listener at localhost/42247] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 21:10:37,728 INFO [Listener at localhost/42247] http.HttpServer(1146): Jetty bound to port 42111 2023-07-24 21:10:37,728 INFO [Listener at localhost/42247] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:10:37,733 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,733 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e754d6b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:10:37,734 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,734 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41aab68c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:10:37,742 INFO [Listener at localhost/42247] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:10:37,743 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:10:37,743 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:10:37,743 INFO [Listener at localhost/42247] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 21:10:37,744 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:37,745 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@273cd09d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:10:37,746 INFO [Listener at localhost/42247] server.AbstractConnector(333): Started ServerConnector@25fe622e{HTTP/1.1, (http/1.1)}{0.0.0.0:42111} 2023-07-24 21:10:37,746 INFO [Listener at localhost/42247] server.Server(415): Started @8392ms 2023-07-24 21:10:37,751 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:10:37,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5253f0fe{HTTP/1.1, (http/1.1)}{0.0.0.0:41509} 2023-07-24 21:10:37,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8401ms 2023-07-24 21:10:37,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:10:37,765 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 21:10:37,767 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:10:37,789 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:10:37,789 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:10:37,789 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:10:37,789 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:10:37,790 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:37,791 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 21:10:37,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37361,1690233035466 from backup master directory 2023-07-24 
21:10:37,793 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 21:10:37,797 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:10:37,797 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 21:10:37,798 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:10:37,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:10:37,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-24 21:10:37,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-24 21:10:37,891 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/hbase.id with ID: 939e4b99-f9b3-4bdf-93a6-9abec0ed80a2 2023-07-24 21:10:37,933 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:37,950 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:38,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x47902e6c to 127.0.0.1:59094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:10:38,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1942be44, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:10:38,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:38,071 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 21:10:38,091 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-24 21:10:38,091 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-24 21:10:38,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 21:10:38,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 21:10:38,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:10:38,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store-tmp 2023-07-24 21:10:38,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:38,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 21:10:38,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:10:38,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:10:38,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 21:10:38,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:10:38,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
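[editor's note] The 'proc' column family printed in the master:store descriptor above (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', not in-memory) is built internally by the master for its local store region. Purely as an illustration, a roughly equivalent descriptor could be assembled with the public HBase 2.x client API; the table name below is hypothetical and this sketch does not reproduce the master's internal MasterRegion path.

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class StoreDescriptorSketch {
      public static void main(String[] args) {
        // Mirrors the attributes printed in the log: one version, ROW bloom
        // filter, 64 KB blocks, not in-memory, no TTL.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)
            .setBloomFilterType(BloomType.ROW)
            .setBlocksize(65536)
            .setInMemory(false)
            .setTimeToLive(HConstants.FOREVER)
            .build();

        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_store"))   // hypothetical name
            .setColumnFamily(proc)
            .build();

        // For an ordinary user table this descriptor would be handed to Admin#createTable.
        System.out.println(desc);
      }
    }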
2023-07-24 21:10:38,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:10:38,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/WALs/jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:10:38,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37361%2C1690233035466, suffix=, logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/WALs/jenkins-hbase4.apache.org,37361,1690233035466, archiveDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/oldWALs, maxLogs=10 2023-07-24 21:10:38,262 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK] 2023-07-24 21:10:38,262 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK] 2023-07-24 21:10:38,262 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK] 2023-07-24 21:10:38,271 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 21:10:38,347 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/WALs/jenkins-hbase4.apache.org,37361,1690233035466/jenkins-hbase4.apache.org%2C37361%2C1690233035466.1690233038216 2023-07-24 21:10:38,348 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK], DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK], DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK]] 2023-07-24 21:10:38,348 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:38,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:38,352 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:10:38,354 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:10:38,439 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:10:38,448 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 21:10:38,485 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 21:10:38,502 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-24 21:10:38,507 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:10:38,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:10:38,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:10:38,533 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:38,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11630598720, jitterRate=0.08318391442298889}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:38,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:10:38,535 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 21:10:38,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 21:10:38,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 21:10:38,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 21:10:38,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-24 21:10:38,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 49 msec 2023-07-24 21:10:38,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 21:10:38,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 21:10:38,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
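[editor's note] The ProcedureExecutor line above starts a small number of core workers with a much larger burst limit (max worker count=50). This is not HBase's own executor, just a sketch of the standard JDK way to get the same core-plus-burst shape, using a hand-off queue so the pool actually grows under load.

    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BurstPoolSketch {
      public static void main(String[] args) {
        // 5 core workers, up to 50 during bursts; idle extras die after 60s.
        // A SynchronousQueue is used because a ThreadPoolExecutor only grows
        // past its core size when the queue refuses to accept the task.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            5, 50, 60, TimeUnit.SECONDS, new SynchronousQueue<>());
        pool.execute(() -> System.out.println("worker task"));
        pool.shutdown();
      }
    }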
2023-07-24 21:10:38,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 21:10:38,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 21:10:38,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 21:10:38,680 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:38,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 21:10:38,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 21:10:38,702 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 21:10:38,708 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:10:38,708 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:10:38,708 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:10:38,708 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:10:38,709 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:38,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37361,1690233035466, sessionid=0x101992bd9f80000, setting cluster-up flag (Was=false) 2023-07-24 21:10:38,731 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:38,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 21:10:38,751 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:10:38,757 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:38,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 21:10:38,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:10:38,768 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.hbase-snapshot/.tmp 2023-07-24 21:10:38,850 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(951): ClusterId : 939e4b99-f9b3-4bdf-93a6-9abec0ed80a2 2023-07-24 21:10:38,851 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(951): ClusterId : 939e4b99-f9b3-4bdf-93a6-9abec0ed80a2 2023-07-24 21:10:38,850 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(951): ClusterId : 939e4b99-f9b3-4bdf-93a6-9abec0ed80a2 2023-07-24 21:10:38,859 DEBUG [RS:2;jenkins-hbase4:40083] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:10:38,859 DEBUG [RS:1;jenkins-hbase4:35829] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:10:38,859 DEBUG [RS:0;jenkins-hbase4:39543] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:10:38,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 21:10:38,868 DEBUG [RS:2;jenkins-hbase4:40083] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:10:38,868 DEBUG [RS:1;jenkins-hbase4:35829] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:10:38,868 DEBUG [RS:1;jenkins-hbase4:35829] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:10:38,868 DEBUG [RS:0;jenkins-hbase4:39543] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:10:38,868 DEBUG [RS:2;jenkins-hbase4:40083] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:10:38,868 DEBUG [RS:0;jenkins-hbase4:39543] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:10:38,872 DEBUG [RS:0;jenkins-hbase4:39543] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:10:38,872 DEBUG [RS:2;jenkins-hbase4:40083] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:10:38,872 DEBUG [RS:1;jenkins-hbase4:35829] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:10:38,877 DEBUG 
[RS:0;jenkins-hbase4:39543] zookeeper.ReadOnlyZKClient(139): Connect 0x2cb19361 to 127.0.0.1:59094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:10:38,878 DEBUG [RS:2;jenkins-hbase4:40083] zookeeper.ReadOnlyZKClient(139): Connect 0x7996ac9f to 127.0.0.1:59094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:10:38,878 DEBUG [RS:1;jenkins-hbase4:35829] zookeeper.ReadOnlyZKClient(139): Connect 0x4e64ade4 to 127.0.0.1:59094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:10:38,888 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 21:10:38,892 DEBUG [RS:0;jenkins-hbase4:39543] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22767887, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:10:38,892 DEBUG [RS:0;jenkins-hbase4:39543] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2649f811, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:10:38,897 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 21:10:38,897 DEBUG [RS:2;jenkins-hbase4:40083] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77515ddd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:10:38,898 DEBUG [RS:2;jenkins-hbase4:40083] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@736ec7ef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:10:38,899 DEBUG [RS:1;jenkins-hbase4:35829] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@40a35a58, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:10:38,899 DEBUG [RS:1;jenkins-hbase4:35829] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@449cc694, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:10:38,900 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 21:10:38,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
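[editor's note] Each region server above opens its own ReadOnlyZKClient to the quorum (session timeout=90000ms), and the processes react to znode events such as NodeCreated on /hbase/master or /hbase/running. A minimal sketch of the same connect, register-ephemeral-node, and watch pattern with the plain Apache ZooKeeper client; the address and paths are placeholders, not this test's quorum.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkWatchSketch {
      public static void main(String[] args) throws Exception {
        // Session timeout mirrors the 90000 ms shown above; address is a placeholder.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90000,
            event -> System.out.println("event " + event.getType() + " on " + event.getPath()));

        // Register an ephemeral znode, similar in spirit to a server's /hbase/rs entry:
        // it disappears automatically when the session dies.
        zk.create("/demo-member", new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Set a one-shot watch; a later create/delete of the path fires the Watcher above.
        zk.exists("/demo-leader", true);

        Thread.sleep(1000);
        zk.close();
      }
    }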
2023-07-24 21:10:38,925 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:40083 2023-07-24 21:10:38,929 DEBUG [RS:0;jenkins-hbase4:39543] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39543 2023-07-24 21:10:38,931 DEBUG [RS:1;jenkins-hbase4:35829] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35829 2023-07-24 21:10:38,934 INFO [RS:2;jenkins-hbase4:40083] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:10:38,934 INFO [RS:2;jenkins-hbase4:40083] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:10:38,934 INFO [RS:0;jenkins-hbase4:39543] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:10:38,935 INFO [RS:0;jenkins-hbase4:39543] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:10:38,934 INFO [RS:1;jenkins-hbase4:35829] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:10:38,935 INFO [RS:1;jenkins-hbase4:35829] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:10:38,935 DEBUG [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 21:10:38,935 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 21:10:38,935 DEBUG [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 21:10:38,939 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37361,1690233035466 with isa=jenkins-hbase4.apache.org/172.31.14.131:40083, startcode=1690233037694 2023-07-24 21:10:38,939 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37361,1690233035466 with isa=jenkins-hbase4.apache.org/172.31.14.131:39543, startcode=1690233037533 2023-07-24 21:10:38,939 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37361,1690233035466 with isa=jenkins-hbase4.apache.org/172.31.14.131:35829, startcode=1690233037637 2023-07-24 21:10:38,963 DEBUG [RS:0;jenkins-hbase4:39543] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:10:38,963 DEBUG [RS:2;jenkins-hbase4:40083] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:10:38,963 DEBUG [RS:1;jenkins-hbase4:35829] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:10:39,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 21:10:39,031 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56721, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:10:39,031 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33465, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 
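[editor's note] The ShutdownHook lines above note that each region server installs a JVM shutdown hook thread. As a generic, non-HBase-specific illustration, the underlying JDK mechanism is simply:

    public class ShutdownHookSketch {
      public static void main(String[] args) {
        // Runs when the JVM exits normally or receives SIGTERM, giving the
        // process a chance to flush and release resources before it goes away.
        Runtime.getRuntime().addShutdownHook(
            new Thread(() -> System.out.println("cleaning up before exit"),
                       "demo-shutdown-hook"));
        System.out.println("main done");
      }
    }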
2023-07-24 21:10:39,031 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50899, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:10:39,042 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:39,052 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:39,054 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:39,068 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 21:10:39,075 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 21:10:39,075 DEBUG [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 21:10:39,076 WARN [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
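[editor's note] The ServerNotRunningYetException traces above are expected during startup: the region servers call reportForDuty before the master's RPC services are fully up, and each one sleeps 100 ms and retries, as the following warnings show. A generic sketch of that retry-until-ready loop; checkReady is a hypothetical stand-in for the real RPC.

    public class RetrySketch {
      public static void main(String[] args) throws InterruptedException {
        long sleepMs = 100;                 // matches the 100 ms back-off in the log
        for (int attempt = 1; attempt <= 50; attempt++) {
          try {
            checkReady(attempt);            // stand-in for the reportForDuty RPC
            System.out.println("registered on attempt " + attempt);
            return;
          } catch (IllegalStateException notRunningYet) {
            System.out.println("not running yet, retrying in " + sleepMs + " ms");
            Thread.sleep(sleepMs);
          }
        }
        System.out.println("gave up");
      }

      // Hypothetical probe: pretends the peer becomes ready on the third attempt.
      private static void checkReady(int attempt) {
        if (attempt < 3) {
          throw new IllegalStateException("server is not running yet");
        }
      }
    }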
2023-07-24 21:10:39,075 DEBUG [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 21:10:39,076 WARN [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 21:10:39,076 WARN [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 21:10:39,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 21:10:39,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 21:10:39,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 21:10:39,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:10:39,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:10:39,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:10:39,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:10:39,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 21:10:39,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:10:39,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,082 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690233069082 2023-07-24 21:10:39,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 21:10:39,089 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 21:10:39,090 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 21:10:39,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 21:10:39,094 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:39,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 21:10:39,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 21:10:39,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 21:10:39,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 21:10:39,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
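[editor's note] The ChoreService entries above schedule periodic cleaners, for example LogsCleaner every 600000 ms. This is not the HBase ChoreService API itself, just the plain JDK analogue of a fixed-period background chore.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService chores = Executors.newSingleThreadScheduledExecutor();
        // Period mirrors the 600000 ms LogsCleaner interval from the log.
        chores.scheduleAtFixedRate(
            () -> System.out.println("scanning old WALs for deletable files"),
            0, 600_000, TimeUnit.MILLISECONDS);
        Thread.sleep(1_000);   // let the first run happen, then stop for this demo
        chores.shutdown();
      }
    }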
2023-07-24 21:10:39,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 21:10:39,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 21:10:39,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 21:10:39,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 21:10:39,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 21:10:39,121 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233039121,5,FailOnTimeoutGroup] 2023-07-24 21:10:39,122 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233039122,5,FailOnTimeoutGroup] 2023-07-24 21:10:39,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 21:10:39,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,186 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37361,1690233035466 with isa=jenkins-hbase4.apache.org/172.31.14.131:39543, startcode=1690233037533 2023-07-24 21:10:39,187 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37361,1690233035466 with isa=jenkins-hbase4.apache.org/172.31.14.131:35829, startcode=1690233037637 2023-07-24 21:10:39,190 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37361,1690233035466 with isa=jenkins-hbase4.apache.org/172.31.14.131:40083, startcode=1690233037694 2023-07-24 21:10:39,203 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37361] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,207 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
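[editor's note] The hfile_cleaner pool above is initialized with a chain of cleaner plugins (TimeToLiveMasterLocalStoreHFileCleaner, HFileLinkCleaner, SnapshotHFileCleaner, TimeToLiveHFileCleaner); broadly, a file is only removed when every delegate agrees it is safe. A small sketch of that delegate-chain idea, with hypothetical predicates standing in for the real plugin classes.

    import java.util.List;
    import java.util.function.Predicate;

    public class CleanerChainSketch {
      public static void main(String[] args) {
        // Each delegate answers "is this file safe to delete?" for one concern.
        Predicate<String> oldEnough   = name -> !name.endsWith(".recent");
        Predicate<String> notSnapshot = name -> !name.contains("snapshot");
        List<Predicate<String>> delegates = List.of(oldEnough, notSnapshot);

        for (String file : List.of("a.hfile", "b.recent", "snapshot-c.hfile")) {
          boolean deletable = delegates.stream().allMatch(d -> d.test(file));
          System.out.println(file + " deletable=" + deletable);
        }
      }
    }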
2023-07-24 21:10:39,208 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 21:10:39,212 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:39,213 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:39,213 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37361] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:39,213 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 21:10:39,213 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7 2023-07-24 21:10:39,214 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 21:10:39,214 DEBUG [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7 2023-07-24 21:10:39,215 DEBUG [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44343 2023-07-24 21:10:39,215 DEBUG [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43473 2023-07-24 21:10:39,215 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37361] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:39,215 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] 
rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 21:10:39,215 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 21:10:39,217 DEBUG [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7 2023-07-24 21:10:39,217 DEBUG [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44343 2023-07-24 21:10:39,218 DEBUG [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43473 2023-07-24 21:10:39,218 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7 2023-07-24 21:10:39,218 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44343 2023-07-24 21:10:39,218 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43473 2023-07-24 21:10:39,247 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:10:39,248 DEBUG [RS:1;jenkins-hbase4:35829] zookeeper.ZKUtil(162): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:39,248 WARN [RS:1;jenkins-hbase4:35829] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:10:39,248 DEBUG [RS:2;jenkins-hbase4:40083] zookeeper.ZKUtil(162): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:39,248 INFO [RS:1;jenkins-hbase4:35829] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:10:39,248 DEBUG [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:39,249 DEBUG [RS:0;jenkins-hbase4:39543] zookeeper.ZKUtil(162): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,248 WARN [RS:2;jenkins-hbase4:40083] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:10:39,249 WARN [RS:0;jenkins-hbase4:39543] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
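[editor's note] Above, each region server logs the "Config from master" values it adopts: hbase.rootdir, fs.defaultFS, and hbase.master.info.port. These are ordinary Configuration keys; as a client-side sketch, they can be set the same way (the values below are placeholders, not the ones from this run).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Placeholder values; a real deployment points these at its own NameNode.
        conf.set("fs.defaultFS", "hdfs://namenode.example:8020");
        conf.set("hbase.rootdir", "hdfs://namenode.example:8020/hbase");
        conf.setInt("hbase.master.info.port", 16010);

        System.out.println("rootdir = " + conf.get("hbase.rootdir"));
      }
    }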
2023-07-24 21:10:39,262 INFO [RS:2;jenkins-hbase4:40083] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:10:39,262 INFO [RS:0;jenkins-hbase4:39543] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:10:39,263 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:39,263 DEBUG [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,265 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39543,1690233037533] 2023-07-24 21:10:39,265 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40083,1690233037694] 2023-07-24 21:10:39,266 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35829,1690233037637] 2023-07-24 21:10:39,292 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:39,294 DEBUG [RS:0;jenkins-hbase4:39543] zookeeper.ZKUtil(162): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,294 DEBUG [RS:2;jenkins-hbase4:40083] zookeeper.ZKUtil(162): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,294 DEBUG [RS:1;jenkins-hbase4:35829] zookeeper.ZKUtil(162): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,294 DEBUG [RS:2;jenkins-hbase4:40083] zookeeper.ZKUtil(162): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:39,294 DEBUG [RS:0;jenkins-hbase4:39543] zookeeper.ZKUtil(162): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:39,295 DEBUG [RS:1;jenkins-hbase4:35829] zookeeper.ZKUtil(162): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:39,295 DEBUG [RS:2;jenkins-hbase4:40083] zookeeper.ZKUtil(162): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:39,296 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 21:10:39,297 DEBUG [RS:0;jenkins-hbase4:39543] zookeeper.ZKUtil(162): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:39,297 DEBUG [RS:1;jenkins-hbase4:35829] zookeeper.ZKUtil(162): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:39,299 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info 2023-07-24 21:10:39,300 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 21:10:39,301 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:39,301 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 21:10:39,304 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:10:39,305 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 21:10:39,309 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:10:39,311 DEBUG [RS:0;jenkins-hbase4:39543] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:10:39,311 DEBUG [RS:1;jenkins-hbase4:35829] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:10:39,313 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:39,313 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 21:10:39,317 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table 2023-07-24 21:10:39,318 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 21:10:39,320 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:39,321 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740 2023-07-24 21:10:39,323 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740 2023-07-24 21:10:39,327 INFO [RS:1;jenkins-hbase4:35829] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:10:39,328 INFO [RS:0;jenkins-hbase4:39543] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:10:39,327 INFO [RS:2;jenkins-hbase4:40083] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:10:39,330 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 21:10:39,333 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 21:10:39,350 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:39,352 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10890130400, jitterRate=0.01422242820262909}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 21:10:39,353 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 21:10:39,353 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 21:10:39,353 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 21:10:39,353 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 21:10:39,353 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 21:10:39,353 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 21:10:39,354 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 21:10:39,355 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 21:10:39,361 INFO [RS:2;jenkins-hbase4:40083] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:10:39,362 INFO [RS:1;jenkins-hbase4:35829] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:10:39,363 INFO [RS:0;jenkins-hbase4:39543] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:10:39,365 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 21:10:39,365 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 21:10:39,371 INFO [RS:1;jenkins-hbase4:35829] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:10:39,371 INFO [RS:2;jenkins-hbase4:40083] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:10:39,371 INFO [RS:0;jenkins-hbase4:39543] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:10:39,372 INFO [RS:2;jenkins-hbase4:40083] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 21:10:39,371 INFO [RS:1;jenkins-hbase4:35829] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,372 INFO [RS:0;jenkins-hbase4:39543] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,373 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:10:39,373 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:10:39,373 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:10:39,378 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 21:10:39,382 INFO [RS:0;jenkins-hbase4:39543] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,382 INFO [RS:1;jenkins-hbase4:35829] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,382 INFO [RS:2;jenkins-hbase4:40083] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,383 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,383 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,382 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,383 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 
21:10:39,383 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:10:39,384 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,384 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:10:39,385 DEBUG [RS:2;jenkins-hbase4:40083] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:10:39,385 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:0;jenkins-hbase4:39543] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,385 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,386 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,386 DEBUG [RS:1;jenkins-hbase4:35829] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:39,390 INFO [RS:0;jenkins-hbase4:39543] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,391 INFO [RS:0;jenkins-hbase4:39543] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,391 INFO [RS:0;jenkins-hbase4:39543] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,402 INFO [RS:1;jenkins-hbase4:35829] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,402 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 21:10:39,402 INFO [RS:2;jenkins-hbase4:40083] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,402 INFO [RS:1;jenkins-hbase4:35829] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,405 INFO [RS:2;jenkins-hbase4:40083] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,405 INFO [RS:1;jenkins-hbase4:35829] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,405 INFO [RS:2;jenkins-hbase4:40083] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,407 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 21:10:39,423 INFO [RS:0;jenkins-hbase4:39543] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:10:39,428 INFO [RS:0;jenkins-hbase4:39543] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39543,1690233037533-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,429 INFO [RS:1;jenkins-hbase4:35829] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:10:39,430 INFO [RS:1;jenkins-hbase4:35829] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35829,1690233037637-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 21:10:39,427 INFO [RS:2;jenkins-hbase4:40083] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:10:39,435 INFO [RS:2;jenkins-hbase4:40083] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40083,1690233037694-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:39,454 INFO [RS:0;jenkins-hbase4:39543] regionserver.Replication(203): jenkins-hbase4.apache.org,39543,1690233037533 started 2023-07-24 21:10:39,457 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39543,1690233037533, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39543, sessionid=0x101992bd9f80001 2023-07-24 21:10:39,457 INFO [RS:1;jenkins-hbase4:35829] regionserver.Replication(203): jenkins-hbase4.apache.org,35829,1690233037637 started 2023-07-24 21:10:39,457 DEBUG [RS:0;jenkins-hbase4:39543] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:10:39,457 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35829,1690233037637, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35829, sessionid=0x101992bd9f80002 2023-07-24 21:10:39,457 DEBUG [RS:0;jenkins-hbase4:39543] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,458 INFO [RS:2;jenkins-hbase4:40083] regionserver.Replication(203): jenkins-hbase4.apache.org,40083,1690233037694 started 2023-07-24 21:10:39,458 DEBUG [RS:0;jenkins-hbase4:39543] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39543,1690233037533' 2023-07-24 21:10:39,458 DEBUG [RS:1;jenkins-hbase4:35829] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:10:39,458 DEBUG [RS:0;jenkins-hbase4:39543] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:10:39,458 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40083,1690233037694, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40083, sessionid=0x101992bd9f80003 2023-07-24 21:10:39,458 DEBUG [RS:1;jenkins-hbase4:35829] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:39,459 DEBUG [RS:2;jenkins-hbase4:40083] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:10:39,461 DEBUG [RS:2;jenkins-hbase4:40083] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:39,461 DEBUG [RS:2;jenkins-hbase4:40083] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40083,1690233037694' 2023-07-24 21:10:39,459 DEBUG [RS:1;jenkins-hbase4:35829] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35829,1690233037637' 2023-07-24 21:10:39,461 DEBUG [RS:2;jenkins-hbase4:40083] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:10:39,461 DEBUG [RS:1;jenkins-hbase4:35829] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:10:39,461 DEBUG 
[RS:0;jenkins-hbase4:39543] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:10:39,462 DEBUG [RS:2;jenkins-hbase4:40083] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:10:39,462 DEBUG [RS:0;jenkins-hbase4:39543] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:10:39,462 DEBUG [RS:1;jenkins-hbase4:35829] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:10:39,462 DEBUG [RS:0;jenkins-hbase4:39543] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:10:39,462 DEBUG [RS:0;jenkins-hbase4:39543] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,462 DEBUG [RS:0;jenkins-hbase4:39543] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39543,1690233037533' 2023-07-24 21:10:39,462 DEBUG [RS:0;jenkins-hbase4:39543] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:10:39,463 DEBUG [RS:2;jenkins-hbase4:40083] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:10:39,463 DEBUG [RS:2;jenkins-hbase4:40083] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:10:39,463 DEBUG [RS:2;jenkins-hbase4:40083] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:39,464 DEBUG [RS:2;jenkins-hbase4:40083] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40083,1690233037694' 2023-07-24 21:10:39,464 DEBUG [RS:2;jenkins-hbase4:40083] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:10:39,463 DEBUG [RS:1;jenkins-hbase4:35829] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:10:39,464 DEBUG [RS:1;jenkins-hbase4:35829] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:10:39,464 DEBUG [RS:1;jenkins-hbase4:35829] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:39,464 DEBUG [RS:1;jenkins-hbase4:35829] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35829,1690233037637' 2023-07-24 21:10:39,464 DEBUG [RS:1;jenkins-hbase4:35829] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:10:39,464 DEBUG [RS:2;jenkins-hbase4:40083] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:10:39,465 DEBUG [RS:1;jenkins-hbase4:35829] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:10:39,465 DEBUG [RS:2;jenkins-hbase4:40083] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:10:39,465 INFO [RS:2;jenkins-hbase4:40083] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 21:10:39,465 INFO [RS:2;jenkins-hbase4:40083] quotas.RegionServerSpaceQuotaManager(80): Quota 
support disabled, not starting space quota manager. 2023-07-24 21:10:39,465 DEBUG [RS:1;jenkins-hbase4:35829] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:10:39,466 INFO [RS:1;jenkins-hbase4:35829] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 21:10:39,464 DEBUG [RS:0;jenkins-hbase4:39543] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:10:39,466 INFO [RS:1;jenkins-hbase4:35829] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 21:10:39,467 DEBUG [RS:0;jenkins-hbase4:39543] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:10:39,467 INFO [RS:0;jenkins-hbase4:39543] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 21:10:39,467 INFO [RS:0;jenkins-hbase4:39543] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 21:10:39,562 DEBUG [jenkins-hbase4:37361] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 21:10:39,581 DEBUG [jenkins-hbase4:37361] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:39,584 INFO [RS:0;jenkins-hbase4:39543] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39543%2C1690233037533, suffix=, logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,39543,1690233037533, archiveDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs, maxLogs=32 2023-07-24 21:10:39,584 INFO [RS:1;jenkins-hbase4:35829] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35829%2C1690233037637, suffix=, logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,35829,1690233037637, archiveDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs, maxLogs=32 2023-07-24 21:10:39,584 INFO [RS:2;jenkins-hbase4:40083] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40083%2C1690233037694, suffix=, logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,40083,1690233037694, archiveDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs, maxLogs=32 2023-07-24 21:10:39,586 DEBUG [jenkins-hbase4:37361] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:39,586 DEBUG [jenkins-hbase4:37361] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:39,586 DEBUG [jenkins-hbase4:37361] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:39,586 DEBUG [jenkins-hbase4:37361] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:39,594 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39543,1690233037533, state=OPENING 2023-07-24 21:10:39,606 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create 
it 2023-07-24 21:10:39,612 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:39,614 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK] 2023-07-24 21:10:39,615 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:10:39,614 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK] 2023-07-24 21:10:39,614 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK] 2023-07-24 21:10:39,622 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:39,632 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK] 2023-07-24 21:10:39,632 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK] 2023-07-24 21:10:39,635 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK] 2023-07-24 21:10:39,639 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK] 2023-07-24 21:10:39,641 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK] 2023-07-24 21:10:39,641 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK] 2023-07-24 21:10:39,651 INFO [RS:1;jenkins-hbase4:35829] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,35829,1690233037637/jenkins-hbase4.apache.org%2C35829%2C1690233037637.1690233039591 2023-07-24 21:10:39,651 INFO [RS:2;jenkins-hbase4:40083] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,40083,1690233037694/jenkins-hbase4.apache.org%2C40083%2C1690233037694.1690233039590 2023-07-24 21:10:39,655 DEBUG [RS:1;jenkins-hbase4:35829] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK], DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK], DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK]] 2023-07-24 21:10:39,655 INFO [RS:0;jenkins-hbase4:39543] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,39543,1690233037533/jenkins-hbase4.apache.org%2C39543%2C1690233037533.1690233039591 2023-07-24 21:10:39,658 DEBUG [RS:2;jenkins-hbase4:40083] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK], DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK], DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK]] 2023-07-24 21:10:39,659 DEBUG [RS:0;jenkins-hbase4:39543] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK], DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK], DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK]] 2023-07-24 21:10:39,709 WARN [ReadOnlyZKClient-127.0.0.1:59094@0x47902e6c] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 21:10:39,737 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37361,1690233035466] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:10:39,744 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60530, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:10:39,745 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39543] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:60530 deadline: 1690233099745, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,827 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:39,831 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:10:39,836 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60536, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:10:39,850 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 21:10:39,850 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:10:39,855 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39543%2C1690233037533.meta, suffix=.meta, logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,39543,1690233037533, archiveDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs, maxLogs=32 2023-07-24 21:10:39,891 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK] 2023-07-24 21:10:39,891 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK] 2023-07-24 21:10:39,895 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK] 2023-07-24 21:10:39,910 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,39543,1690233037533/jenkins-hbase4.apache.org%2C39543%2C1690233037533.meta.1690233039856.meta 2023-07-24 21:10:39,913 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK], DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK], DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK]] 2023-07-24 21:10:39,914 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:39,916 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 21:10:39,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 21:10:39,921 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 21:10:39,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 21:10:39,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:39,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 21:10:39,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 21:10:39,934 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 21:10:39,936 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info 2023-07-24 21:10:39,936 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info 2023-07-24 21:10:39,937 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 21:10:39,938 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:39,938 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 21:10:39,940 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:10:39,940 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:10:39,940 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 21:10:39,941 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:39,941 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 21:10:39,944 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table 2023-07-24 21:10:39,944 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table 2023-07-24 21:10:39,944 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 21:10:39,945 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:39,947 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740 2023-07-24 21:10:39,950 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740 2023-07-24 21:10:39,955 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 21:10:39,957 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 21:10:39,958 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9398800160, jitterRate=-0.12466852366924286}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 21:10:39,959 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 21:10:39,973 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690233039824 2023-07-24 21:10:39,994 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 21:10:39,995 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 21:10:39,996 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39543,1690233037533, state=OPEN 2023-07-24 21:10:39,999 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 21:10:40,000 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:10:40,010 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 21:10:40,010 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39543,1690233037533 in 377 msec 2023-07-24 21:10:40,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 21:10:40,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 633 msec 2023-07-24 21:10:40,025 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1140 sec 2023-07-24 21:10:40,025 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690233040025, completionTime=-1 2023-07-24 21:10:40,025 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 21:10:40,026 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 21:10:40,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 21:10:40,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690233100090 2023-07-24 21:10:40,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690233160090 2023-07-24 21:10:40,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 64 msec 2023-07-24 21:10:40,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37361,1690233035466-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:40,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37361,1690233035466-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:40,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37361,1690233035466-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:40,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37361, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:40,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:40,124 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 21:10:40,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 21:10:40,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:40,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 21:10:40,163 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:10:40,167 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:10:40,186 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/hbase/namespace/27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,189 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/hbase/namespace/27723428b4c241280e87cd60e505360f empty. 2023-07-24 21:10:40,190 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/hbase/namespace/27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,190 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 21:10:40,247 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:40,249 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 27723428b4c241280e87cd60e505360f, NAME => 'hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:40,270 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37361,1690233035466] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:40,274 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37361,1690233035466] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 21:10:40,277 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:40,277 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 27723428b4c241280e87cd60e505360f, disabling compactions & flushes 2023-07-24 21:10:40,277 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:40,277 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:40,277 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. after waiting 0 ms 2023-07-24 21:10:40,277 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:40,277 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:40,277 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 27723428b4c241280e87cd60e505360f: 2023-07-24 21:10:40,278 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:10:40,280 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:10:40,282 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:10:40,284 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,285 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f empty. 
2023-07-24 21:10:40,285 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,286 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 21:10:40,305 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233040285"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233040285"}]},"ts":"1690233040285"} 2023-07-24 21:10:40,328 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:40,333 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0aa6c5b31ae7fded5577dadecfbf135f, NAME => 'hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:40,364 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:40,365 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 0aa6c5b31ae7fded5577dadecfbf135f, disabling compactions & flushes 2023-07-24 21:10:40,365 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:40,365 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:40,365 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. after waiting 0 ms 2023-07-24 21:10:40,365 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:40,365 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:40,365 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 0aa6c5b31ae7fded5577dadecfbf135f: 2023-07-24 21:10:40,368 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 21:10:40,371 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:10:40,371 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:10:40,372 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233040372"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233040372"}]},"ts":"1690233040372"} 2023-07-24 21:10:40,377 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 21:10:40,379 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233040371"}]},"ts":"1690233040371"} 2023-07-24 21:10:40,379 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:10:40,379 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233040379"}]},"ts":"1690233040379"} 2023-07-24 21:10:40,385 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 21:10:40,389 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 21:10:40,392 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:40,393 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:40,393 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:40,393 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:40,393 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:40,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=27723428b4c241280e87cd60e505360f, ASSIGN}] 2023-07-24 21:10:40,398 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=27723428b4c241280e87cd60e505360f, ASSIGN 2023-07-24 21:10:40,399 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:40,399 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:40,399 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:40,399 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:40,399 DEBUG [PEWorker-4] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:40,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0aa6c5b31ae7fded5577dadecfbf135f, ASSIGN}] 2023-07-24 21:10:40,400 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=27723428b4c241280e87cd60e505360f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40083,1690233037694; forceNewPlan=false, retain=false 2023-07-24 21:10:40,404 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=0aa6c5b31ae7fded5577dadecfbf135f, ASSIGN 2023-07-24 21:10:40,406 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0aa6c5b31ae7fded5577dadecfbf135f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:40,407 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-24 21:10:40,408 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=27723428b4c241280e87cd60e505360f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:40,408 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=0aa6c5b31ae7fded5577dadecfbf135f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:40,408 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233040408"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233040408"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233040408"}]},"ts":"1690233040408"} 2023-07-24 21:10:40,408 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233040408"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233040408"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233040408"}]},"ts":"1690233040408"} 2023-07-24 21:10:40,417 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 27723428b4c241280e87cd60e505360f, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:10:40,421 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 0aa6c5b31ae7fded5577dadecfbf135f, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:40,573 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:40,573 DEBUG 
[RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:10:40,577 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50564, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:10:40,584 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:40,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0aa6c5b31ae7fded5577dadecfbf135f, NAME => 'hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:40,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 21:10:40,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. service=MultiRowMutationService 2023-07-24 21:10:40,586 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 21:10:40,587 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,587 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:40,587 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,587 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,593 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 
2023-07-24 21:10:40,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 27723428b4c241280e87cd60e505360f, NAME => 'hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:40,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,595 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:40,595 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,595 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,596 INFO [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,598 INFO [StoreOpener-27723428b4c241280e87cd60e505360f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,599 DEBUG [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m 2023-07-24 21:10:40,599 DEBUG [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m 2023-07-24 21:10:40,599 INFO [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0aa6c5b31ae7fded5577dadecfbf135f columnFamilyName m 2023-07-24 21:10:40,600 INFO [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] regionserver.HStore(310): Store=0aa6c5b31ae7fded5577dadecfbf135f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:40,600 DEBUG 
[StoreOpener-27723428b4c241280e87cd60e505360f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/info 2023-07-24 21:10:40,601 DEBUG [StoreOpener-27723428b4c241280e87cd60e505360f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/info 2023-07-24 21:10:40,601 INFO [StoreOpener-27723428b4c241280e87cd60e505360f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 27723428b4c241280e87cd60e505360f columnFamilyName info 2023-07-24 21:10:40,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,602 INFO [StoreOpener-27723428b4c241280e87cd60e505360f-1] regionserver.HStore(310): Store=27723428b4c241280e87cd60e505360f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:40,603 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,608 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:40,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:40,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:40,613 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
0aa6c5b31ae7fded5577dadecfbf135f; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2eb2d738, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:40,614 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0aa6c5b31ae7fded5577dadecfbf135f: 2023-07-24 21:10:40,614 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:40,615 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 27723428b4c241280e87cd60e505360f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9891210720, jitterRate=-0.0788092166185379}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:40,615 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 27723428b4c241280e87cd60e505360f: 2023-07-24 21:10:40,617 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f., pid=8, masterSystemTime=1690233040573 2023-07-24 21:10:40,617 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f., pid=9, masterSystemTime=1690233040577 2023-07-24 21:10:40,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:40,622 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:40,624 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:40,624 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=0aa6c5b31ae7fded5577dadecfbf135f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:40,624 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 
2023-07-24 21:10:40,624 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233040623"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233040623"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233040623"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233040623"}]},"ts":"1690233040623"} 2023-07-24 21:10:40,625 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=27723428b4c241280e87cd60e505360f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:40,625 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233040625"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233040625"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233040625"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233040625"}]},"ts":"1690233040625"} 2023-07-24 21:10:40,634 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 21:10:40,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 0aa6c5b31ae7fded5577dadecfbf135f, server=jenkins-hbase4.apache.org,39543,1690233037533 in 207 msec 2023-07-24 21:10:40,639 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-24 21:10:40,640 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 27723428b4c241280e87cd60e505360f, server=jenkins-hbase4.apache.org,40083,1690233037694 in 213 msec 2023-07-24 21:10:40,643 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 21:10:40,643 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0aa6c5b31ae7fded5577dadecfbf135f, ASSIGN in 236 msec 2023-07-24 21:10:40,644 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-24 21:10:40,644 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=27723428b4c241280e87cd60e505360f, ASSIGN in 245 msec 2023-07-24 21:10:40,645 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:10:40,645 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233040645"}]},"ts":"1690233040645"} 2023-07-24 21:10:40,646 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:10:40,646 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233040646"}]},"ts":"1690233040646"} 2023-07-24 21:10:40,648 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 21:10:40,649 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 21:10:40,651 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:10:40,654 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 382 msec 2023-07-24 21:10:40,657 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:10:40,662 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 509 msec 2023-07-24 21:10:40,665 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 21:10:40,667 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:10:40,667 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:40,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:10:40,699 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50568, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:10:40,704 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 21:10:40,704 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 21:10:40,719 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 21:10:40,744 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:10:40,750 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 44 msec 2023-07-24 21:10:40,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 21:10:40,768 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:10:40,781 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-07-24 21:10:40,792 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 21:10:40,795 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 21:10:40,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.997sec 2023-07-24 21:10:40,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 21:10:40,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 21:10:40,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 21:10:40,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37361,1690233035466-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 21:10:40,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37361,1690233035466-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-24 21:10:40,807 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:40,807 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:40,811 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 21:10:40,819 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 21:10:40,820 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 21:10:40,859 DEBUG [Listener at localhost/42247] zookeeper.ReadOnlyZKClient(139): Connect 0x49f725ef to 127.0.0.1:59094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:10:40,864 DEBUG [Listener at localhost/42247] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@456579cb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:10:40,880 DEBUG [hconnection-0x62d0debf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:10:40,893 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60708, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:10:40,907 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:10:40,909 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:40,937 DEBUG [Listener at localhost/42247] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 21:10:40,961 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60356, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 21:10:40,978 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 21:10:40,978 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:10:40,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 21:10:40,985 DEBUG [Listener at localhost/42247] zookeeper.ReadOnlyZKClient(139): Connect 0x37bc4d17 to 127.0.0.1:59094 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-24 21:10:40,990 DEBUG [Listener at localhost/42247] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4feb285c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:10:40,990 INFO [Listener at localhost/42247] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59094 2023-07-24 21:10:40,998 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:10:41,002 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101992bd9f8000a connected 2023-07-24 21:10:41,031 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=422, OpenFileDescriptor=673, MaxFileDescriptor=60000, SystemLoadAverage=408, ProcessCount=177, AvailableMemoryMB=6471 2023-07-24 21:10:41,034 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-24 21:10:41,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:41,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:41,115 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 21:10:41,131 INFO [Listener at localhost/42247] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:10:41,131 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:41,131 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:41,131 INFO [Listener at localhost/42247] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:10:41,131 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:10:41,131 INFO [Listener at localhost/42247] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:10:41,131 INFO [Listener at localhost/42247] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:10:41,135 INFO [Listener at localhost/42247] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43799 2023-07-24 21:10:41,136 INFO [Listener at localhost/42247] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-24 21:10:41,138 DEBUG [Listener at localhost/42247] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:10:41,140 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:41,147 INFO [Listener at localhost/42247] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:10:41,152 INFO [Listener at localhost/42247] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43799 connecting to ZooKeeper ensemble=127.0.0.1:59094 2023-07-24 21:10:41,160 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:437990x0, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:10:41,163 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(162): regionserver:437990x0, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 21:10:41,164 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(162): regionserver:437990x0, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 21:10:41,165 DEBUG [Listener at localhost/42247] zookeeper.ZKUtil(164): regionserver:437990x0, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:10:41,167 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43799-0x101992bd9f8000b connected 2023-07-24 21:10:41,167 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43799 2023-07-24 21:10:41,168 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43799 2023-07-24 21:10:41,171 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43799 2023-07-24 21:10:41,174 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43799 2023-07-24 21:10:41,178 DEBUG [Listener at localhost/42247] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43799 2023-07-24 21:10:41,181 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:10:41,181 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:10:41,181 INFO [Listener at localhost/42247] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:10:41,182 INFO [Listener at localhost/42247] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:10:41,182 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:10:41,182 INFO [Listener at localhost/42247] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:10:41,182 INFO [Listener at localhost/42247] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 21:10:41,182 INFO [Listener at localhost/42247] http.HttpServer(1146): Jetty bound to port 41501 2023-07-24 21:10:41,183 INFO [Listener at localhost/42247] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:10:41,186 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:41,186 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@161302d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:10:41,187 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:41,187 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@d4406e7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:10:41,199 INFO [Listener at localhost/42247] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:10:41,200 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:10:41,200 INFO [Listener at localhost/42247] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:10:41,200 INFO [Listener at localhost/42247] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:10:41,203 INFO [Listener at localhost/42247] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:10:41,205 INFO [Listener at localhost/42247] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4b822857{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:10:41,207 INFO [Listener at localhost/42247] server.AbstractConnector(333): Started ServerConnector@2d7a97c6{HTTP/1.1, (http/1.1)}{0.0.0.0:41501} 2023-07-24 21:10:41,207 INFO [Listener at localhost/42247] server.Server(415): Started @11853ms 2023-07-24 21:10:41,213 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(951): ClusterId : 939e4b99-f9b3-4bdf-93a6-9abec0ed80a2 2023-07-24 21:10:41,213 DEBUG [RS:3;jenkins-hbase4:43799] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:10:41,217 DEBUG [RS:3;jenkins-hbase4:43799] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:10:41,217 DEBUG [RS:3;jenkins-hbase4:43799] 
procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:10:41,219 DEBUG [RS:3;jenkins-hbase4:43799] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:10:41,221 DEBUG [RS:3;jenkins-hbase4:43799] zookeeper.ReadOnlyZKClient(139): Connect 0x74ae934f to 127.0.0.1:59094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:10:41,231 DEBUG [RS:3;jenkins-hbase4:43799] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a6963aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:10:41,231 DEBUG [RS:3;jenkins-hbase4:43799] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e56f83f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:10:41,242 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43799 2023-07-24 21:10:41,242 INFO [RS:3;jenkins-hbase4:43799] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:10:41,242 INFO [RS:3;jenkins-hbase4:43799] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:10:41,242 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 21:10:41,243 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37361,1690233035466 with isa=jenkins-hbase4.apache.org/172.31.14.131:43799, startcode=1690233041130 2023-07-24 21:10:41,243 DEBUG [RS:3;jenkins-hbase4:43799] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:10:41,250 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52255, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:10:41,251 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37361] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,251 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 21:10:41,252 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7 2023-07-24 21:10:41,252 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44343 2023-07-24 21:10:41,252 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43473 2023-07-24 21:10:41,258 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:10:41,258 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:10:41,258 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:10:41,258 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:41,258 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:10:41,259 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43799,1690233041130] 2023-07-24 21:10:41,259 DEBUG [RS:3;jenkins-hbase4:43799] zookeeper.ZKUtil(162): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,259 WARN [RS:3;jenkins-hbase4:43799] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 21:10:41,260 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 21:10:41,260 INFO [RS:3;jenkins-hbase4:43799] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:10:41,260 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:41,260 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,260 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:41,260 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:41,267 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:41,270 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:41,270 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37361,1690233035466] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 21:10:41,270 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:41,270 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:41,270 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:41,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:41,271 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,272 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,279 DEBUG [RS:3;jenkins-hbase4:43799] zookeeper.ZKUtil(162): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:41,280 DEBUG [RS:3;jenkins-hbase4:43799] zookeeper.ZKUtil(162): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:41,280 DEBUG [RS:3;jenkins-hbase4:43799] zookeeper.ZKUtil(162): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:41,281 DEBUG [RS:3;jenkins-hbase4:43799] zookeeper.ZKUtil(162): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,283 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:10:41,283 INFO [RS:3;jenkins-hbase4:43799] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:10:41,287 INFO [RS:3;jenkins-hbase4:43799] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:10:41,291 INFO [RS:3;jenkins-hbase4:43799] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:10:41,291 INFO [RS:3;jenkins-hbase4:43799] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:41,293 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:10:41,294 INFO [RS:3;jenkins-hbase4:43799] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 21:10:41,295 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,295 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,295 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,296 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,296 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,296 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:10:41,296 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,297 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,297 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,297 DEBUG [RS:3;jenkins-hbase4:43799] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:10:41,304 INFO [RS:3;jenkins-hbase4:43799] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:41,304 INFO [RS:3;jenkins-hbase4:43799] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:41,304 INFO [RS:3;jenkins-hbase4:43799] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:10:41,321 INFO [RS:3;jenkins-hbase4:43799] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:10:41,321 INFO [RS:3;jenkins-hbase4:43799] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43799,1690233041130-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 21:10:41,333 INFO [RS:3;jenkins-hbase4:43799] regionserver.Replication(203): jenkins-hbase4.apache.org,43799,1690233041130 started 2023-07-24 21:10:41,334 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43799,1690233041130, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43799, sessionid=0x101992bd9f8000b 2023-07-24 21:10:41,334 DEBUG [RS:3;jenkins-hbase4:43799] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:10:41,334 DEBUG [RS:3;jenkins-hbase4:43799] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,334 DEBUG [RS:3;jenkins-hbase4:43799] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43799,1690233041130' 2023-07-24 21:10:41,334 DEBUG [RS:3;jenkins-hbase4:43799] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:10:41,334 DEBUG [RS:3;jenkins-hbase4:43799] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:10:41,335 DEBUG [RS:3;jenkins-hbase4:43799] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:10:41,335 DEBUG [RS:3;jenkins-hbase4:43799] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:10:41,335 DEBUG [RS:3;jenkins-hbase4:43799] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:41,335 DEBUG [RS:3;jenkins-hbase4:43799] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43799,1690233041130' 2023-07-24 21:10:41,335 DEBUG [RS:3;jenkins-hbase4:43799] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:10:41,335 DEBUG [RS:3;jenkins-hbase4:43799] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:10:41,336 DEBUG [RS:3;jenkins-hbase4:43799] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:10:41,336 INFO [RS:3;jenkins-hbase4:43799] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 21:10:41,336 INFO [RS:3;jenkins-hbase4:43799] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
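The entries above trace the fourth region server (RS:3 on port 43799) coming online: WAL provider chosen, memstore flusher and compaction chores scheduled, executor pools started, the flush-table and online-snapshot procedure members registered, and quota support left disabled. In the TestRSGroupsBase setup this extra server is started on the already-running mini cluster; the following is a minimal sketch of doing that with HBaseTestingUtility, where the TEST_UTIL variable and the exact startup path used by the test are assumptions, not the test's literal code:

// Minimal sketch (assumed setup, not the test's literal code): start an extra
// region server on a running mini cluster, as RS:3 is started in this log.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class StartExtraRegionServer {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
    TEST_UTIL.startMiniCluster(3);                      // three region servers, as in this run's setup
    MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();

    // Bring up a fourth region server; it registers under /hbase/rs and the
    // RSGroupInfoManager's ServerEventsListenerThread then reports "Updated with servers: 4".
    JVMClusterUtil.RegionServerThread rs = cluster.startRegionServer();
    rs.waitForServerOnline();

    TEST_UTIL.shutdownMiniCluster();
  }
}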
2023-07-24 21:10:41,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:41,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:41,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:41,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:41,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:41,356 DEBUG [hconnection-0x526d64d3-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:10:41,363 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60712, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:10:41,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:41,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:41,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:41,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:41,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:60356 deadline: 1690234241384, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
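The ConstraintException above is the result of asking the RSGroup endpoint to move the master's own address (jenkins-hbase4.apache.org:37361) into the newly added group "master": only live region servers can be moved between groups, so RSGroupAdminServer.moveServers rejects the address as offline or non-existent, and the test setup treats this as informational in the next entry. Below is a minimal client-side sketch of the call that triggers it, assuming an open Connection to the cluster; RSGroupAdminClient and Address are the classes named in the stack trace, and the group name and port are taken from the log:

// Minimal sketch (assumed client code): the kind of call that produces the
// ConstraintException logged above. Only live region servers may be moved
// between rsgroups, so passing the master's address fails on the server side.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterIntoGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master");
      // The master's address is not a region server, so RSGroupAdminServer.moveServers()
      // rejects it: "Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist."
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37361)),
          "master");
    }
  }
}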
2023-07-24 21:10:41,387 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:10:41,390 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:41,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:41,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:41,393 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:41,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:41,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:41,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:41,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:41,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:41,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:41,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:41,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:41,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:41,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:41,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:41,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:41,432 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:41,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:41,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:41,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:41,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:41,441 INFO [RS:3;jenkins-hbase4:43799] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43799%2C1690233041130, suffix=, logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,43799,1690233041130, archiveDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs, maxLogs=32 2023-07-24 21:10:41,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(238): Moving server region 0aa6c5b31ae7fded5577dadecfbf135f, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:41,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0aa6c5b31ae7fded5577dadecfbf135f, REOPEN/MOVE 2023-07-24 21:10:41,446 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=0aa6c5b31ae7fded5577dadecfbf135f, REOPEN/MOVE 2023-07-24 21:10:41,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:41,448 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0aa6c5b31ae7fded5577dadecfbf135f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:41,448 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233041448"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233041448"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233041448"}]},"ts":"1690233041448"} 2023-07-24 21:10:41,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 21:10:41,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-24 
21:10:41,450 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 21:10:41,452 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39543,1690233037533, state=CLOSING 2023-07-24 21:10:41,452 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 0aa6c5b31ae7fded5577dadecfbf135f, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:41,453 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 21:10:41,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:41,453 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:10:41,460 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 0aa6c5b31ae7fded5577dadecfbf135f, server=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:41,478 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK] 2023-07-24 21:10:41,478 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK] 2023-07-24 21:10:41,479 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK] 2023-07-24 21:10:41,486 INFO [RS:3;jenkins-hbase4:43799] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,43799,1690233041130/jenkins-hbase4.apache.org%2C43799%2C1690233041130.1690233041443 2023-07-24 21:10:41,486 DEBUG [RS:3;jenkins-hbase4:43799] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK], DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK], DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK]] 2023-07-24 21:10:41,616 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-24 21:10:41,617 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 21:10:41,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 21:10:41,618 
DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 21:10:41,618 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 21:10:41,618 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 21:10:41,619 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-24 21:10:41,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/info/a4ff62118094406f8797849c27790e53 2023-07-24 21:10:41,835 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/table/a6d3d2c79d444a43a66486b1c0a12857 2023-07-24 21:10:41,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/info/a4ff62118094406f8797849c27790e53 as hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info/a4ff62118094406f8797849c27790e53 2023-07-24 21:10:41,871 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info/a4ff62118094406f8797849c27790e53, entries=21, sequenceid=15, filesize=7.1 K 2023-07-24 21:10:41,874 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/table/a6d3d2c79d444a43a66486b1c0a12857 as hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table/a6d3d2c79d444a43a66486b1c0a12857 2023-07-24 21:10:41,884 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table/a6d3d2c79d444a43a66486b1c0a12857, entries=4, sequenceid=15, filesize=4.8 K 2023-07-24 21:10:41,896 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2916, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 277ms, sequenceid=15, compaction requested=false 2023-07-24 21:10:41,898 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 21:10:41,926 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-24 21:10:41,927 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:10:41,928 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 21:10:41,928 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 21:10:41,928 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,43799,1690233041130 record at close sequenceid=15 2023-07-24 21:10:41,938 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-24 21:10:41,942 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-24 21:10:41,948 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-24 21:10:41,948 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39543,1690233037533 in 489 msec 2023-07-24 21:10:41,950 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:42,100 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 21:10:42,100 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43799,1690233041130, state=OPENING 2023-07-24 21:10:42,102 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 21:10:42,102 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:10:42,102 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:42,259 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:42,259 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:10:42,263 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35386, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:10:42,278 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 21:10:42,279 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:10:42,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43799%2C1690233041130.meta, suffix=.meta, logDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,43799,1690233041130, archiveDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs, maxLogs=32 2023-07-24 21:10:42,320 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK] 2023-07-24 21:10:42,322 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK] 2023-07-24 21:10:42,322 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK] 2023-07-24 21:10:42,329 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/WALs/jenkins-hbase4.apache.org,43799,1690233041130/jenkins-hbase4.apache.org%2C43799%2C1690233041130.meta.1690233042283.meta 2023-07-24 21:10:42,329 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:38973,DS-81618ec5-5d71-4828-a561-d1a2477475b2,DISK], DatanodeInfoWithStorage[127.0.0.1:46493,DS-610343df-8cd3-412f-9a03-7632737f42f0,DISK], DatanodeInfoWithStorage[127.0.0.1:33907,DS-84c42161-298e-4ed1-a9d8-1be7f73287ea,DISK]] 2023-07-24 21:10:42,329 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:42,329 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 21:10:42,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 21:10:42,330 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 21:10:42,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 21:10:42,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:42,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 21:10:42,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 21:10:42,332 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 21:10:42,333 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info 2023-07-24 21:10:42,333 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info 2023-07-24 21:10:42,334 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 21:10:42,345 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info/a4ff62118094406f8797849c27790e53 2023-07-24 21:10:42,346 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:42,346 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 21:10:42,347 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:10:42,347 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:10:42,348 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 21:10:42,349 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:42,349 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 21:10:42,350 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table 2023-07-24 21:10:42,350 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table 2023-07-24 21:10:42,350 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 21:10:42,360 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table/a6d3d2c79d444a43a66486b1c0a12857 2023-07-24 21:10:42,360 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:42,361 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740 2023-07-24 21:10:42,364 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740 2023-07-24 21:10:42,368 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 21:10:42,370 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 21:10:42,372 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11799052960, jitterRate=0.09887243807315826}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 21:10:42,372 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 21:10:42,373 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1690233042259 2023-07-24 21:10:42,377 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 21:10:42,378 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 21:10:42,379 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43799,1690233041130, state=OPEN 2023-07-24 21:10:42,380 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 21:10:42,381 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:10:42,385 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-24 21:10:42,385 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,43799,1690233041130 in 278 msec 2023-07-24 21:10:42,388 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 938 msec 2023-07-24 21:10:42,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-24 21:10:42,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:42,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0aa6c5b31ae7fded5577dadecfbf135f, disabling compactions & flushes 2023-07-24 21:10:42,535 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:42,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:42,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. after waiting 0 ms 2023-07-24 21:10:42,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:42,535 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0aa6c5b31ae7fded5577dadecfbf135f 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-24 21:10:42,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/.tmp/m/e492b673ae014547a1792a0cb636080f 2023-07-24 21:10:42,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/.tmp/m/e492b673ae014547a1792a0cb636080f as hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m/e492b673ae014547a1792a0cb636080f 2023-07-24 21:10:42,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m/e492b673ae014547a1792a0cb636080f, entries=3, sequenceid=9, filesize=5.2 K 2023-07-24 21:10:42,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for 0aa6c5b31ae7fded5577dadecfbf135f in 66ms, sequenceid=9, compaction requested=false 2023-07-24 21:10:42,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 21:10:42,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-24 21:10:42,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:10:42,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:42,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0aa6c5b31ae7fded5577dadecfbf135f: 2023-07-24 21:10:42,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0aa6c5b31ae7fded5577dadecfbf135f move to jenkins-hbase4.apache.org,43799,1690233041130 record at close sequenceid=9 2023-07-24 21:10:42,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:42,615 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0aa6c5b31ae7fded5577dadecfbf135f, regionState=CLOSED 2023-07-24 21:10:42,615 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233042615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233042615"}]},"ts":"1690233042615"} 2023-07-24 21:10:42,616 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39543] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:60530 deadline: 1690233102616, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43799 startCode=1690233041130. As of locationSeqNum=15. 2023-07-24 21:10:42,717 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:10:42,719 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35394, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:10:42,728 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-24 21:10:42,728 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 0aa6c5b31ae7fded5577dadecfbf135f, server=jenkins-hbase4.apache.org,39543,1690233037533 in 1.2710 sec 2023-07-24 21:10:42,729 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=0aa6c5b31ae7fded5577dadecfbf135f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:42,879 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
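From the group creation onward, the log records the server-side effect of moving two live region servers into Group_testTableMoveTruncateAndDrop_655290510: the master first evicts regions that do not belong to the target group, re-opening hbase:meta and hbase:rsgroup on the remaining default-group server via the REOPEN/MOVE procedures (pids 12 through 17), and only then reports "Move servers done" further below. A minimal sketch of checking the outcome from a client is shown next, assuming an open Connection; the group name and expected servers are taken from the log, and this is illustrative rather than the test's own verification code:

// Minimal sketch (assumed client code): after moving servers into the group,
// confirm the group's membership and list what the moved servers still host.
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyGroupMove {
  static void verify(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo group =
        rsGroupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_655290510");
    System.out.println("servers in group: " + group.getServers()); // expect ports 39543 and 35829

    try (Admin admin = conn.getAdmin()) {
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        if (group.containsServer(sn.getAddress())) {
          // After the REOPEN/MOVE procedures above complete, hbase:meta and
          // hbase:rsgroup should no longer appear among these regions.
          System.out.println(sn + " hosts " + admin.getRegions(sn));
        }
      }
    }
  }
}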
2023-07-24 21:10:42,880 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0aa6c5b31ae7fded5577dadecfbf135f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:42,880 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233042880"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233042880"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233042880"}]},"ts":"1690233042880"} 2023-07-24 21:10:42,883 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE; OpenRegionProcedure 0aa6c5b31ae7fded5577dadecfbf135f, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:43,042 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:43,042 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0aa6c5b31ae7fded5577dadecfbf135f, NAME => 'hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:43,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 21:10:43,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. service=MultiRowMutationService 2023-07-24 21:10:43,043 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 21:10:43,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:43,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:43,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:43,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:43,045 INFO [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:43,046 DEBUG [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m 2023-07-24 21:10:43,046 DEBUG [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m 2023-07-24 21:10:43,047 INFO [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0aa6c5b31ae7fded5577dadecfbf135f columnFamilyName m 2023-07-24 21:10:43,057 DEBUG [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] regionserver.HStore(539): loaded hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m/e492b673ae014547a1792a0cb636080f 2023-07-24 21:10:43,057 INFO [StoreOpener-0aa6c5b31ae7fded5577dadecfbf135f-1] regionserver.HStore(310): Store=0aa6c5b31ae7fded5577dadecfbf135f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:43,058 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:43,061 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:43,066 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:10:43,068 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0aa6c5b31ae7fded5577dadecfbf135f; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@409e90d2, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:43,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0aa6c5b31ae7fded5577dadecfbf135f: 2023-07-24 21:10:43,069 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f., pid=17, masterSystemTime=1690233043037 2023-07-24 21:10:43,071 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:43,072 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:10:43,073 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=0aa6c5b31ae7fded5577dadecfbf135f, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:43,073 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233043073"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233043073"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233043073"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233043073"}]},"ts":"1690233043073"} 2023-07-24 21:10:43,078 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-24 21:10:43,078 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; OpenRegionProcedure 0aa6c5b31ae7fded5577dadecfbf135f, server=jenkins-hbase4.apache.org,43799,1690233041130 in 192 msec 2023-07-24 21:10:43,081 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=0aa6c5b31ae7fded5577dadecfbf135f, REOPEN/MOVE in 1.6350 sec 2023-07-24 21:10:43,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533] are moved back to default 2023-07-24 21:10:43,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:43,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:43,463 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39543] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:60712 deadline: 1690233103463, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43799 startCode=1690233041130. As of locationSeqNum=9. 2023-07-24 21:10:43,578 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39543] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:60712 deadline: 1690233103577, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43799 startCode=1690233041130. As of locationSeqNum=15. 2023-07-24 21:10:43,680 DEBUG [hconnection-0x526d64d3-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:10:43,687 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35408, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:10:43,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:43,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:43,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:43,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:43,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:43,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:43,727 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:10:43,730 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39543] ipc.CallRunner(144): callId: 46 service: ClientService methodName: ExecService size: 619 connection: 172.31.14.131:60530 deadline: 1690233103729, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43799 startCode=1690233041130. As of locationSeqNum=9. 
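The entries up to this point record one test step end to end: two region servers are moved from the default group into Group_testTableMoveTruncateAndDrop_655290510 (forcing the hbase:rsgroup region through a REOPEN/MOVE), after which the groups are listed and the new group is fetched back. A minimal Java sketch of the client calls that would produce this traffic, assuming the branch-2.4 hbase-rsgroup client (RSGroupAdminClient); the group name and the two server ports come from the log, everything else is illustrative:

import java.util.Arrays;
import java.util.HashSet;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);

      // Create the target group, then move the two region servers named in the
      // log (jenkins-hbase4.apache.org:35829 and :39543) out of 'default' into it.
      String group = "Group_testTableMoveTruncateAndDrop_655290510";
      groups.addRSGroup(group);
      groups.moveServers(new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase4.apache.org", 35829),
          Address.fromParts("jenkins-hbase4.apache.org", 39543))), group);

      // Mirrors the ListRSGroupInfos and GetRSGroupInfo requests logged above.
      for (RSGroupInfo g : groups.listRSGroups()) {
        System.out.println(g.getName() + " -> " + g.getServers());
      }
      RSGroupInfo info = groups.getRSGroupInfo(group);
      System.out.println("servers in " + group + ": " + info.getServers());
    }
  }
}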
2023-07-24 21:10:43,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-24 21:10:43,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-24 21:10:43,836 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:43,837 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:43,837 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:43,838 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:43,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-24 21:10:43,847 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:10:43,854 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:43,854 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:43,854 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:43,854 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:43,854 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:43,855 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee empty. 2023-07-24 21:10:43,855 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 empty. 2023-07-24 21:10:43,855 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff empty. 
2023-07-24 21:10:43,855 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 empty. 2023-07-24 21:10:43,856 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:43,856 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:43,856 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 empty. 2023-07-24 21:10:43,856 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:43,856 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:43,860 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:43,860 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 21:10:43,905 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:43,907 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => d0950f4e6f52cbf0f042339a231d7eff, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:43,907 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9d3248c46bad1693a300ed900f0a3bb2, NAME => 'Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:43,911 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9d1188490bf4ff5f0ca051ba710a55ee, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:43,968 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:43,969 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 9d1188490bf4ff5f0ca051ba710a55ee, disabling compactions & flushes 2023-07-24 21:10:43,969 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:43,969 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:43,969 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. after waiting 0 ms 2023-07-24 21:10:43,969 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:43,969 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 
2023-07-24 21:10:43,969 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 9d1188490bf4ff5f0ca051ba710a55ee: 2023-07-24 21:10:43,970 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 4c52d2ec88d6dbe4039a9f0da5976970, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:43,972 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:43,972 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:43,974 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 9d3248c46bad1693a300ed900f0a3bb2, disabling compactions & flushes 2023-07-24 21:10:43,975 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing d0950f4e6f52cbf0f042339a231d7eff, disabling compactions & flushes 2023-07-24 21:10:43,975 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:43,975 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:43,975 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:43,975 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. after waiting 0 ms 2023-07-24 21:10:43,975 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:43,975 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 
2023-07-24 21:10:43,975 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:43,975 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. after waiting 0 ms 2023-07-24 21:10:43,975 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for d0950f4e6f52cbf0f042339a231d7eff: 2023-07-24 21:10:43,976 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:43,976 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:43,977 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 9d3248c46bad1693a300ed900f0a3bb2: 2023-07-24 21:10:43,977 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => eb25f322fc71d3a1737fa27766ba99c0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:43,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:43,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 4c52d2ec88d6dbe4039a9f0da5976970, disabling compactions & flushes 2023-07-24 21:10:43,998 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:43,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:43,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 
after waiting 0 ms 2023-07-24 21:10:43,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:43,998 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:43,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 4c52d2ec88d6dbe4039a9f0da5976970: 2023-07-24 21:10:44,022 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:44,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing eb25f322fc71d3a1737fa27766ba99c0, disabling compactions & flushes 2023-07-24 21:10:44,023 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:44,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:44,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. after waiting 0 ms 2023-07-24 21:10:44,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:44,023 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 
2023-07-24 21:10:44,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for eb25f322fc71d3a1737fa27766ba99c0: 2023-07-24 21:10:44,027 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:10:44,028 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044028"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233044028"}]},"ts":"1690233044028"} 2023-07-24 21:10:44,029 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044028"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233044028"}]},"ts":"1690233044028"} 2023-07-24 21:10:44,029 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233044028"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233044028"}]},"ts":"1690233044028"} 2023-07-24 21:10:44,029 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044028"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233044028"}]},"ts":"1690233044028"} 2023-07-24 21:10:44,029 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233044028"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233044028"}]},"ts":"1690233044028"} 2023-07-24 21:10:44,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-24 21:10:44,093 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
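The CreateTableProcedure entries above show Group_testTableMoveTruncateAndDrop being laid out with a single column family 'f' and five regions bounded at 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz', then added to hbase:meta. A sketch of the equivalent client-side create using the standard HBase 2.x Admin API; table name, family and split boundaries are taken from the log, and the two binary split keys are written with Bytes.toBytesBinary escapes:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Four split keys produce the five regions seen in the log
      // ('' -> 'aaaaa', 'aaaaa' -> 'i\xBF\x14i\xBE', ..., 'zzzzz' -> '').
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splits);
    }
  }
}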
2023-07-24 21:10:44,095 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:10:44,095 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233044095"}]},"ts":"1690233044095"} 2023-07-24 21:10:44,097 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 21:10:44,107 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:44,108 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:44,108 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:44,108 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:44,108 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, ASSIGN}] 2023-07-24 21:10:44,115 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, ASSIGN 2023-07-24 21:10:44,115 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, ASSIGN 2023-07-24 21:10:44,117 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, ASSIGN 2023-07-24 21:10:44,117 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, ASSIGN 2023-07-24 21:10:44,119 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, ASSIGN 2023-07-24 21:10:44,119 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40083,1690233037694; forceNewPlan=false, retain=false 2023-07-24 21:10:44,119 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:44,120 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40083,1690233037694; forceNewPlan=false, retain=false 2023-07-24 21:10:44,120 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:44,122 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:44,270 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 21:10:44,274 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=eb25f322fc71d3a1737fa27766ba99c0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:44,274 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233044274"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233044274"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233044274"}]},"ts":"1690233044274"} 2023-07-24 21:10:44,275 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=9d1188490bf4ff5f0ca051ba710a55ee, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:44,275 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044275"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233044275"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233044275"}]},"ts":"1690233044275"} 2023-07-24 21:10:44,275 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=9d3248c46bad1693a300ed900f0a3bb2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:44,276 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233044275"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233044275"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233044275"}]},"ts":"1690233044275"} 2023-07-24 21:10:44,276 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=4c52d2ec88d6dbe4039a9f0da5976970, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:44,276 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=d0950f4e6f52cbf0f042339a231d7eff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:44,276 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044276"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233044276"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233044276"}]},"ts":"1690233044276"} 2023-07-24 21:10:44,276 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044276"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233044276"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233044276"}]},"ts":"1690233044276"} 2023-07-24 21:10:44,278 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=23, state=RUNNABLE; OpenRegionProcedure 
eb25f322fc71d3a1737fa27766ba99c0, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:44,283 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=21, state=RUNNABLE; OpenRegionProcedure 9d1188490bf4ff5f0ca051ba710a55ee, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:44,283 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=22, state=RUNNABLE; OpenRegionProcedure 4c52d2ec88d6dbe4039a9f0da5976970, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:10:44,283 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=19, state=RUNNABLE; OpenRegionProcedure 9d3248c46bad1693a300ed900f0a3bb2, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:44,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=20, state=RUNNABLE; OpenRegionProcedure d0950f4e6f52cbf0f042339a231d7eff, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:10:44,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-24 21:10:44,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:44,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d1188490bf4ff5f0ca051ba710a55ee, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 21:10:44,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:44,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:44,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:44,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:44,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 
2023-07-24 21:10:44,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d0950f4e6f52cbf0f042339a231d7eff, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 21:10:44,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:44,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:44,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:44,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:44,469 INFO [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:44,469 INFO [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:44,474 DEBUG [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/f 2023-07-24 21:10:44,474 DEBUG [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/f 2023-07-24 21:10:44,475 DEBUG [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/f 2023-07-24 21:10:44,475 DEBUG [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/f 2023-07-24 21:10:44,476 INFO [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; 
tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d1188490bf4ff5f0ca051ba710a55ee columnFamilyName f 2023-07-24 21:10:44,477 INFO [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] regionserver.HStore(310): Store=9d1188490bf4ff5f0ca051ba710a55ee/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:44,477 INFO [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d0950f4e6f52cbf0f042339a231d7eff columnFamilyName f 2023-07-24 21:10:44,479 INFO [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] regionserver.HStore(310): Store=d0950f4e6f52cbf0f042339a231d7eff/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:44,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:44,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:44,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:44,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:44,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:44,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:44,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:44,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d0950f4e6f52cbf0f042339a231d7eff; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11560994880, jitterRate=0.07670155167579651}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:44,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d0950f4e6f52cbf0f042339a231d7eff: 2023-07-24 21:10:44,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:44,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d1188490bf4ff5f0ca051ba710a55ee; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10789514880, jitterRate=0.004851877689361572}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:44,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d1188490bf4ff5f0ca051ba710a55ee: 2023-07-24 21:10:44,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee., pid=25, masterSystemTime=1690233044436 2023-07-24 21:10:44,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff., pid=28, masterSystemTime=1690233044436 2023-07-24 21:10:44,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:44,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:44,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 
2023-07-24 21:10:44,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c52d2ec88d6dbe4039a9f0da5976970, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 21:10:44,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:44,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:44,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:44,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:44,523 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=d0950f4e6f52cbf0f042339a231d7eff, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:44,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:44,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:44,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 
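At this point the five new regions are being opened on their assigned servers (pids 24-28 under ppids 19-23). The blocking Admin.createTable call returns only once the master reports pid=18 complete, which is what the repeated "Checking to see if procedure is done pid=18" entries correspond to. A small sketch of how a separate client could wait for the table and inspect where each region landed; the table name comes from the log, the rest is illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class CheckAssignmentSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin();
         RegionLocator locator = conn.getRegionLocator(table)) {
      // Poll until all regions of the table are online, much as the
      // createTable future polls the master for procedure completion.
      while (!admin.isTableAvailable(table)) {
        Thread.sleep(100);
      }
      // Print the server hosting each of the five regions after assignment.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getRegionNameAsString() + " -> " + loc.getServerName());
      }
    }
  }
}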
2023-07-24 21:10:44,524 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044523"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233044523"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233044523"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233044523"}]},"ts":"1690233044523"} 2023-07-24 21:10:44,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d3248c46bad1693a300ed900f0a3bb2, NAME => 'Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 21:10:44,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:44,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:44,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:44,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:44,525 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=9d1188490bf4ff5f0ca051ba710a55ee, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:44,533 INFO [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:44,533 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044525"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233044525"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233044525"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233044525"}]},"ts":"1690233044525"} 2023-07-24 21:10:44,530 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=20 2023-07-24 21:10:44,535 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=20, state=SUCCESS; OpenRegionProcedure d0950f4e6f52cbf0f042339a231d7eff, server=jenkins-hbase4.apache.org,40083,1690233037694 in 241 msec 2023-07-24 21:10:44,537 DEBUG [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/f 2023-07-24 21:10:44,542 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, ASSIGN in 432 msec 2023-07-24 21:10:44,540 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=21 2023-07-24 21:10:44,538 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, ASSIGN in 422 msec 2023-07-24 21:10:44,544 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; OpenRegionProcedure 9d1188490bf4ff5f0ca051ba710a55ee, server=jenkins-hbase4.apache.org,43799,1690233041130 in 253 msec 2023-07-24 21:10:44,544 INFO [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:44,544 DEBUG [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/f 2023-07-24 21:10:44,545 INFO [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c52d2ec88d6dbe4039a9f0da5976970 columnFamilyName f 2023-07-24 21:10:44,546 DEBUG [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/f 2023-07-24 21:10:44,547 DEBUG [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/f 2023-07-24 21:10:44,547 INFO [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] regionserver.HStore(310): Store=4c52d2ec88d6dbe4039a9f0da5976970/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:44,548 INFO [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d3248c46bad1693a300ed900f0a3bb2 columnFamilyName f 2023-07-24 21:10:44,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:44,549 INFO [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] regionserver.HStore(310): Store=9d3248c46bad1693a300ed900f0a3bb2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:44,550 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:44,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:44,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:44,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:44,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:44,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:44,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c52d2ec88d6dbe4039a9f0da5976970; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11985871360, jitterRate=0.1162712574005127}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:44,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c52d2ec88d6dbe4039a9f0da5976970: 2023-07-24 21:10:44,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970., pid=26, masterSystemTime=1690233044436 2023-07-24 21:10:44,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:44,563 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:44,564 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=4c52d2ec88d6dbe4039a9f0da5976970, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:44,564 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233044564"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233044564"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233044564"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233044564"}]},"ts":"1690233044564"} 2023-07-24 21:10:44,569 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=22 2023-07-24 21:10:44,569 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=22, state=SUCCESS; OpenRegionProcedure 4c52d2ec88d6dbe4039a9f0da5976970, server=jenkins-hbase4.apache.org,40083,1690233037694 in 284 msec 2023-07-24 21:10:44,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:44,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d3248c46bad1693a300ed900f0a3bb2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9990197600, jitterRate=-0.06959034502506256}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:44,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d3248c46bad1693a300ed900f0a3bb2: 2023-07-24 21:10:44,579 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, ASSIGN in 461 msec 2023-07-24 21:10:44,579 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2., pid=27, masterSystemTime=1690233044436 2023-07-24 21:10:44,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 
2023-07-24 21:10:44,584 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:44,584 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:44,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb25f322fc71d3a1737fa27766ba99c0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 21:10:44,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:44,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:44,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:44,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:44,585 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=9d3248c46bad1693a300ed900f0a3bb2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:44,585 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233044585"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233044585"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233044585"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233044585"}]},"ts":"1690233044585"} 2023-07-24 21:10:44,590 INFO [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:44,594 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=19 2023-07-24 21:10:44,596 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=19, state=SUCCESS; OpenRegionProcedure 9d3248c46bad1693a300ed900f0a3bb2, server=jenkins-hbase4.apache.org,43799,1690233041130 in 306 msec 2023-07-24 21:10:44,597 DEBUG [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/f 2023-07-24 21:10:44,597 DEBUG [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/f 2023-07-24 21:10:44,598 INFO [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb25f322fc71d3a1737fa27766ba99c0 columnFamilyName f 2023-07-24 21:10:44,598 INFO [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] regionserver.HStore(310): Store=eb25f322fc71d3a1737fa27766ba99c0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:44,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:44,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:44,601 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, ASSIGN in 488 msec 2023-07-24 21:10:44,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:44,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:44,611 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb25f322fc71d3a1737fa27766ba99c0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11695553760, jitterRate=0.08923332393169403}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:44,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb25f322fc71d3a1737fa27766ba99c0: 2023-07-24 21:10:44,617 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0., pid=24, masterSystemTime=1690233044436 2023-07-24 21:10:44,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished 
post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:44,619 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:44,620 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=eb25f322fc71d3a1737fa27766ba99c0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:44,620 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233044620"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233044620"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233044620"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233044620"}]},"ts":"1690233044620"} 2023-07-24 21:10:44,627 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=23 2023-07-24 21:10:44,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=23, state=SUCCESS; OpenRegionProcedure eb25f322fc71d3a1737fa27766ba99c0, server=jenkins-hbase4.apache.org,43799,1690233041130 in 344 msec 2023-07-24 21:10:44,632 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=18 2023-07-24 21:10:44,634 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, ASSIGN in 519 msec 2023-07-24 21:10:44,635 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:10:44,635 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233044635"}]},"ts":"1690233044635"} 2023-07-24 21:10:44,640 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 21:10:44,645 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:10:44,649 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 922 msec 2023-07-24 21:10:44,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-24 21:10:44,858 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-24 21:10:44,858 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-24 21:10:44,859 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:44,860 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39543] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:60708 deadline: 1690233104860, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43799 startCode=1690233041130. As of locationSeqNum=15. 2023-07-24 21:10:44,963 DEBUG [hconnection-0x62d0debf-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:10:44,968 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35422, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:10:44,996 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-24 21:10:44,997 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:44,997 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-24 21:10:44,998 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:45,003 DEBUG [Listener at localhost/42247] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:10:45,006 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42708, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:10:45,009 DEBUG [Listener at localhost/42247] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:10:45,012 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60726, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:10:45,012 DEBUG [Listener at localhost/42247] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:10:45,014 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50578, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:10:45,015 DEBUG [Listener at localhost/42247] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:10:45,017 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35434, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:10:45,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:45,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:45,028 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:45,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:45,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:45,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 9d3248c46bad1693a300ed900f0a3bb2 to RSGroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:45,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:45,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:45,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:45,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:45,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, REOPEN/MOVE 2023-07-24 21:10:45,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region d0950f4e6f52cbf0f042339a231d7eff to RSGroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,048 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, REOPEN/MOVE 2023-07-24 21:10:45,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:45,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:45,048 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:45,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:45,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:45,049 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9d3248c46bad1693a300ed900f0a3bb2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:45,049 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233045049"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045049"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045049"}]},"ts":"1690233045049"} 2023-07-24 21:10:45,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, REOPEN/MOVE 2023-07-24 21:10:45,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 9d1188490bf4ff5f0ca051ba710a55ee to RSGroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,050 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, REOPEN/MOVE 2023-07-24 21:10:45,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:45,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:45,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:45,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:45,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:45,051 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=d0950f4e6f52cbf0f042339a231d7eff, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:45,052 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045051"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045051"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045051"}]},"ts":"1690233045051"} 2023-07-24 21:10:45,052 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, 
ppid=29, state=RUNNABLE; CloseRegionProcedure 9d3248c46bad1693a300ed900f0a3bb2, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:45,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, REOPEN/MOVE 2023-07-24 21:10:45,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 4c52d2ec88d6dbe4039a9f0da5976970 to RSGroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,053 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, REOPEN/MOVE 2023-07-24 21:10:45,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:45,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:45,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:45,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:45,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:45,054 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure d0950f4e6f52cbf0f042339a231d7eff, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:10:45,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, REOPEN/MOVE 2023-07-24 21:10:45,056 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=9d1188490bf4ff5f0ca051ba710a55ee, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:45,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region eb25f322fc71d3a1737fa27766ba99c0 to RSGroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:45,056 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045056"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045056"}]},"ts":"1690233045056"} 2023-07-24 21:10:45,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:45,057 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, REOPEN/MOVE 2023-07-24 21:10:45,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:45,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:45,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:45,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:45,058 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=4c52d2ec88d6dbe4039a9f0da5976970, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:45,059 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045058"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045058"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045058"}]},"ts":"1690233045058"} 2023-07-24 21:10:45,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, REOPEN/MOVE 2023-07-24 21:10:45,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_655290510, current retry=0 2023-07-24 21:10:45,060 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, REOPEN/MOVE 2023-07-24 21:10:45,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure 9d1188490bf4ff5f0ca051ba710a55ee, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:45,062 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=eb25f322fc71d3a1737fa27766ba99c0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:45,062 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233045062"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045062"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045062"}]},"ts":"1690233045062"} 2023-07-24 21:10:45,063 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure 4c52d2ec88d6dbe4039a9f0da5976970, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:10:45,064 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=38, ppid=35, state=RUNNABLE; CloseRegionProcedure eb25f322fc71d3a1737fa27766ba99c0, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:45,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d1188490bf4ff5f0ca051ba710a55ee, disabling compactions & flushes 2023-07-24 21:10:45,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:45,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:45,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. after waiting 0 ms 2023-07-24 21:10:45,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:45,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d0950f4e6f52cbf0f042339a231d7eff, disabling compactions & flushes 2023-07-24 21:10:45,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:45,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:45,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. after waiting 0 ms 2023-07-24 21:10:45,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 
2023-07-24 21:10:45,222 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:45,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:45,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:45,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d1188490bf4ff5f0ca051ba710a55ee: 2023-07-24 21:10:45,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9d1188490bf4ff5f0ca051ba710a55ee move to jenkins-hbase4.apache.org,39543,1690233037533 record at close sequenceid=2 2023-07-24 21:10:45,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:45,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d0950f4e6f52cbf0f042339a231d7eff: 2023-07-24 21:10:45,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d0950f4e6f52cbf0f042339a231d7eff move to jenkins-hbase4.apache.org,35829,1690233037637 record at close sequenceid=2 2023-07-24 21:10:45,229 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,229 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb25f322fc71d3a1737fa27766ba99c0, disabling compactions & flushes 2023-07-24 21:10:45,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:45,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:45,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. after waiting 0 ms 2023-07-24 21:10:45,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 
2023-07-24 21:10:45,231 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=9d1188490bf4ff5f0ca051ba710a55ee, regionState=CLOSED 2023-07-24 21:10:45,231 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045230"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233045230"}]},"ts":"1690233045230"} 2023-07-24 21:10:45,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c52d2ec88d6dbe4039a9f0da5976970, disabling compactions & flushes 2023-07-24 21:10:45,232 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:45,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:45,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. after waiting 0 ms 2023-07-24 21:10:45,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:45,233 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=d0950f4e6f52cbf0f042339a231d7eff, regionState=CLOSED 2023-07-24 21:10:45,233 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045233"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233045233"}]},"ts":"1690233045233"} 2023-07-24 21:10:45,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:45,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 
2023-07-24 21:10:45,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb25f322fc71d3a1737fa27766ba99c0: 2023-07-24 21:10:45,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-24 21:10:45,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding eb25f322fc71d3a1737fa27766ba99c0 move to jenkins-hbase4.apache.org,39543,1690233037533 record at close sequenceid=2 2023-07-24 21:10:45,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure 9d1188490bf4ff5f0ca051ba710a55ee, server=jenkins-hbase4.apache.org,43799,1690233041130 in 175 msec 2023-07-24 21:10:45,243 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-24 21:10:45,243 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure d0950f4e6f52cbf0f042339a231d7eff, server=jenkins-hbase4.apache.org,40083,1690233037694 in 181 msec 2023-07-24 21:10:45,245 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:45,246 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35829,1690233037637; forceNewPlan=false, retain=false 2023-07-24 21:10:45,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:45,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 
2023-07-24 21:10:45,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c52d2ec88d6dbe4039a9f0da5976970: 2023-07-24 21:10:45,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4c52d2ec88d6dbe4039a9f0da5976970 move to jenkins-hbase4.apache.org,39543,1690233037533 record at close sequenceid=2 2023-07-24 21:10:45,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d3248c46bad1693a300ed900f0a3bb2, disabling compactions & flushes 2023-07-24 21:10:45,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:45,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:45,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. after waiting 0 ms 2023-07-24 21:10:45,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:45,251 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=eb25f322fc71d3a1737fa27766ba99c0, regionState=CLOSED 2023-07-24 21:10:45,252 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233045251"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233045251"}]},"ts":"1690233045251"} 2023-07-24 21:10:45,252 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,252 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=4c52d2ec88d6dbe4039a9f0da5976970, regionState=CLOSED 2023-07-24 21:10:45,253 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045252"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233045252"}]},"ts":"1690233045252"} 2023-07-24 21:10:45,259 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=35 2023-07-24 21:10:45,259 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=35, state=SUCCESS; CloseRegionProcedure eb25f322fc71d3a1737fa27766ba99c0, server=jenkins-hbase4.apache.org,43799,1690233041130 in 190 msec 2023-07-24 21:10:45,260 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-24 21:10:45,260 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, REOPEN/MOVE; 
state=CLOSED, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:45,260 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure 4c52d2ec88d6dbe4039a9f0da5976970, server=jenkins-hbase4.apache.org,40083,1690233037694 in 192 msec 2023-07-24 21:10:45,261 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:45,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:45,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:45,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d3248c46bad1693a300ed900f0a3bb2: 2023-07-24 21:10:45,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9d3248c46bad1693a300ed900f0a3bb2 move to jenkins-hbase4.apache.org,39543,1690233037533 record at close sequenceid=2 2023-07-24 21:10:45,269 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9d3248c46bad1693a300ed900f0a3bb2, regionState=CLOSED 2023-07-24 21:10:45,269 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233045269"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233045269"}]},"ts":"1690233045269"} 2023-07-24 21:10:45,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,276 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-24 21:10:45,276 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 9d3248c46bad1693a300ed900f0a3bb2, server=jenkins-hbase4.apache.org,43799,1690233041130 in 219 msec 2023-07-24 21:10:45,277 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:45,395 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 21:10:45,395 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=4c52d2ec88d6dbe4039a9f0da5976970, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:45,395 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=d0950f4e6f52cbf0f042339a231d7eff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:45,395 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9d3248c46bad1693a300ed900f0a3bb2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:45,395 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=9d1188490bf4ff5f0ca051ba710a55ee, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:45,395 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=eb25f322fc71d3a1737fa27766ba99c0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:45,396 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233045395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045395"}]},"ts":"1690233045395"} 2023-07-24 21:10:45,396 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233045395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045395"}]},"ts":"1690233045395"} 2023-07-24 21:10:45,396 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045395"}]},"ts":"1690233045395"} 2023-07-24 21:10:45,396 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045395"}]},"ts":"1690233045395"} 2023-07-24 21:10:45,396 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233045395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233045395"}]},"ts":"1690233045395"} 2023-07-24 21:10:45,398 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=29, state=RUNNABLE; OpenRegionProcedure 
9d3248c46bad1693a300ed900f0a3bb2, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:45,400 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=35, state=RUNNABLE; OpenRegionProcedure eb25f322fc71d3a1737fa27766ba99c0, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:45,401 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=30, state=RUNNABLE; OpenRegionProcedure d0950f4e6f52cbf0f042339a231d7eff, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:45,403 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=33, state=RUNNABLE; OpenRegionProcedure 4c52d2ec88d6dbe4039a9f0da5976970, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:45,403 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=31, state=RUNNABLE; OpenRegionProcedure 9d1188490bf4ff5f0ca051ba710a55ee, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:45,497 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 21:10:45,563 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:45,563 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:10:45,593 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:45,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d3248c46bad1693a300ed900f0a3bb2, NAME => 'Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 21:10:45,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:45,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,602 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42714, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:10:45,607 INFO [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,610 DEBUG 
[StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/f 2023-07-24 21:10:45,610 DEBUG [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/f 2023-07-24 21:10:45,610 INFO [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d3248c46bad1693a300ed900f0a3bb2 columnFamilyName f 2023-07-24 21:10:45,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:45,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d0950f4e6f52cbf0f042339a231d7eff, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 21:10:45,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:45,611 INFO [StoreOpener-9d3248c46bad1693a300ed900f0a3bb2-1] regionserver.HStore(310): Store=9d3248c46bad1693a300ed900f0a3bb2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:45,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:45,624 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 21:10:45,626 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d3248c46bad1693a300ed900f0a3bb2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11192512000, jitterRate=0.04238390922546387}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:45,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d3248c46bad1693a300ed900f0a3bb2: 2023-07-24 21:10:45,632 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2., pid=39, masterSystemTime=1690233045551 2023-07-24 21:10:45,636 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9d3248c46bad1693a300ed900f0a3bb2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:45,636 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233045635"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233045635"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233045635"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233045635"}]},"ts":"1690233045635"} 2023-07-24 21:10:45,636 INFO [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,638 DEBUG [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/f 2023-07-24 21:10:45,638 DEBUG [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/f 2023-07-24 21:10:45,639 INFO [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d0950f4e6f52cbf0f042339a231d7eff columnFamilyName f 2023-07-24 21:10:45,643 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=29 2023-07-24 21:10:45,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=29, state=SUCCESS; OpenRegionProcedure 9d3248c46bad1693a300ed900f0a3bb2, server=jenkins-hbase4.apache.org,39543,1690233037533 in 241 msec 2023-07-24 21:10:45,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:45,648 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:45,648 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:45,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb25f322fc71d3a1737fa27766ba99c0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 21:10:45,648 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, REOPEN/MOVE in 600 msec 2023-07-24 21:10:45,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:45,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,651 INFO [StoreOpener-d0950f4e6f52cbf0f042339a231d7eff-1] regionserver.HStore(310): Store=d0950f4e6f52cbf0f042339a231d7eff/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:45,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,652 INFO [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,654 DEBUG [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/f 2023-07-24 21:10:45,654 DEBUG [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/f 2023-07-24 21:10:45,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,654 INFO [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb25f322fc71d3a1737fa27766ba99c0 columnFamilyName f 2023-07-24 21:10:45,655 INFO [StoreOpener-eb25f322fc71d3a1737fa27766ba99c0-1] regionserver.HStore(310): Store=eb25f322fc71d3a1737fa27766ba99c0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:45,655 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 21:10:45,657 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 21:10:45,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,657 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:10:45,657 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 21:10:45,657 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 21:10:45,657 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 21:10:45,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:45,661 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d0950f4e6f52cbf0f042339a231d7eff; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11067080800, jitterRate=0.030702218413352966}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:45,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d0950f4e6f52cbf0f042339a231d7eff: 2023-07-24 21:10:45,663 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff., pid=41, masterSystemTime=1690233045563 2023-07-24 21:10:45,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:45,668 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb25f322fc71d3a1737fa27766ba99c0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11021208800, jitterRate=0.026430055499076843}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:45,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb25f322fc71d3a1737fa27766ba99c0: 2023-07-24 21:10:45,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:45,669 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 
2023-07-24 21:10:45,670 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=d0950f4e6f52cbf0f042339a231d7eff, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:45,670 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0., pid=40, masterSystemTime=1690233045551 2023-07-24 21:10:45,671 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045670"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233045670"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233045670"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233045670"}]},"ts":"1690233045670"} 2023-07-24 21:10:45,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:45,673 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:45,673 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:45,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d1188490bf4ff5f0ca051ba710a55ee, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 21:10:45,674 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=eb25f322fc71d3a1737fa27766ba99c0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:45,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,675 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233045674"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233045674"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233045674"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233045674"}]},"ts":"1690233045674"} 2023-07-24 21:10:45,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:45,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 
9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,678 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=30 2023-07-24 21:10:45,681 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=30, state=SUCCESS; OpenRegionProcedure d0950f4e6f52cbf0f042339a231d7eff, server=jenkins-hbase4.apache.org,35829,1690233037637 in 272 msec 2023-07-24 21:10:45,681 INFO [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,683 DEBUG [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/f 2023-07-24 21:10:45,683 DEBUG [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/f 2023-07-24 21:10:45,683 INFO [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d1188490bf4ff5f0ca051ba710a55ee columnFamilyName f 2023-07-24 21:10:45,684 INFO [StoreOpener-9d1188490bf4ff5f0ca051ba710a55ee-1] regionserver.HStore(310): Store=9d1188490bf4ff5f0ca051ba710a55ee/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:45,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, REOPEN/MOVE in 633 msec 2023-07-24 21:10:45,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=40, resume processing ppid=35 2023-07-24 21:10:45,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=35, state=SUCCESS; OpenRegionProcedure eb25f322fc71d3a1737fa27766ba99c0, server=jenkins-hbase4.apache.org,39543,1690233037533 in 277 msec 2023-07-24 21:10:45,695 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, REOPEN/MOVE in 636 msec 2023-07-24 21:10:45,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:45,697 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d1188490bf4ff5f0ca051ba710a55ee; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9523636480, jitterRate=-0.11304223537445068}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:45,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d1188490bf4ff5f0ca051ba710a55ee: 2023-07-24 21:10:45,698 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee., pid=43, masterSystemTime=1690233045551 2023-07-24 21:10:45,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:45,700 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:45,700 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 
2023-07-24 21:10:45,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c52d2ec88d6dbe4039a9f0da5976970, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 21:10:45,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:45,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,702 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=9d1188490bf4ff5f0ca051ba710a55ee, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:45,702 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045702"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233045702"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233045702"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233045702"}]},"ts":"1690233045702"} 2023-07-24 21:10:45,706 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=31 2023-07-24 21:10:45,706 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=31, state=SUCCESS; OpenRegionProcedure 9d1188490bf4ff5f0ca051ba710a55ee, server=jenkins-hbase4.apache.org,39543,1690233037533 in 301 msec 2023-07-24 21:10:45,706 INFO [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,708 DEBUG [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/f 2023-07-24 21:10:45,708 DEBUG [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/f 2023-07-24 21:10:45,708 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=9d1188490bf4ff5f0ca051ba710a55ee, REOPEN/MOVE in 655 msec 2023-07-24 21:10:45,708 INFO [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c52d2ec88d6dbe4039a9f0da5976970 columnFamilyName f 2023-07-24 21:10:45,709 INFO [StoreOpener-4c52d2ec88d6dbe4039a9f0da5976970-1] regionserver.HStore(310): Store=4c52d2ec88d6dbe4039a9f0da5976970/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:45,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:45,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c52d2ec88d6dbe4039a9f0da5976970; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10108973280, jitterRate=-0.05852849781513214}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:45,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c52d2ec88d6dbe4039a9f0da5976970: 2023-07-24 21:10:45,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970., pid=42, masterSystemTime=1690233045551 2023-07-24 21:10:45,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:45,720 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 
2023-07-24 21:10:45,721 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=4c52d2ec88d6dbe4039a9f0da5976970, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:45,721 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233045721"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233045721"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233045721"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233045721"}]},"ts":"1690233045721"} 2023-07-24 21:10:45,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=33 2023-07-24 21:10:45,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=33, state=SUCCESS; OpenRegionProcedure 4c52d2ec88d6dbe4039a9f0da5976970, server=jenkins-hbase4.apache.org,39543,1690233037533 in 321 msec 2023-07-24 21:10:45,729 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, REOPEN/MOVE in 672 msec 2023-07-24 21:10:46,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-24 21:10:46,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_655290510. 
2023-07-24 21:10:46,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:46,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:46,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:46,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:46,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:46,068 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:46,074 INFO [Listener at localhost/42247] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:46,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:46,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:46,092 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233046091"}]},"ts":"1690233046091"} 2023-07-24 21:10:46,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-24 21:10:46,093 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 21:10:46,095 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 21:10:46,100 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, UNASSIGN}] 2023-07-24 21:10:46,103 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, UNASSIGN 2023-07-24 21:10:46,103 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, UNASSIGN 2023-07-24 21:10:46,103 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, UNASSIGN 2023-07-24 21:10:46,104 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, UNASSIGN 2023-07-24 21:10:46,104 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, UNASSIGN 2023-07-24 21:10:46,104 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=4c52d2ec88d6dbe4039a9f0da5976970, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:46,104 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046104"}]},"ts":"1690233046104"} 2023-07-24 21:10:46,105 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=eb25f322fc71d3a1737fa27766ba99c0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:46,105 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046105"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046105"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046105"}]},"ts":"1690233046105"} 2023-07-24 21:10:46,106 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=d0950f4e6f52cbf0f042339a231d7eff, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:46,106 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=9d3248c46bad1693a300ed900f0a3bb2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:46,106 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046106"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046106"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046106"}]},"ts":"1690233046106"} 2023-07-24 21:10:46,106 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=9d1188490bf4ff5f0ca051ba710a55ee, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:46,106 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046106"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046106"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046106"}]},"ts":"1690233046106"} 2023-07-24 21:10:46,106 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046106"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046106"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046106"}]},"ts":"1690233046106"} 2023-07-24 21:10:46,109 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=48, state=RUNNABLE; CloseRegionProcedure 4c52d2ec88d6dbe4039a9f0da5976970, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:46,111 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=49, state=RUNNABLE; CloseRegionProcedure eb25f322fc71d3a1737fa27766ba99c0, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:46,112 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=46, state=RUNNABLE; CloseRegionProcedure d0950f4e6f52cbf0f042339a231d7eff, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:46,114 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=47, state=RUNNABLE; CloseRegionProcedure 9d1188490bf4ff5f0ca051ba710a55ee, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:46,115 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=45, state=RUNNABLE; CloseRegionProcedure 9d3248c46bad1693a300ed900f0a3bb2, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:46,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-24 21:10:46,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:46,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d1188490bf4ff5f0ca051ba710a55ee, disabling compactions & flushes 2023-07-24 21:10:46,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 
2023-07-24 21:10:46,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:46,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. after waiting 0 ms 2023-07-24 21:10:46,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 2023-07-24 21:10:46,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:46,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d0950f4e6f52cbf0f042339a231d7eff, disabling compactions & flushes 2023-07-24 21:10:46,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:46,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:46,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. after waiting 0 ms 2023-07-24 21:10:46,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 2023-07-24 21:10:46,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:46,270 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee. 
2023-07-24 21:10:46,271 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d1188490bf4ff5f0ca051ba710a55ee: 2023-07-24 21:10:46,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:46,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:46,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:46,274 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c52d2ec88d6dbe4039a9f0da5976970, disabling compactions & flushes 2023-07-24 21:10:46,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:46,274 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:46,274 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. after waiting 0 ms 2023-07-24 21:10:46,274 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:46,274 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=9d1188490bf4ff5f0ca051ba710a55ee, regionState=CLOSED 2023-07-24 21:10:46,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff. 
2023-07-24 21:10:46,275 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046274"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046274"}]},"ts":"1690233046274"} 2023-07-24 21:10:46,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d0950f4e6f52cbf0f042339a231d7eff: 2023-07-24 21:10:46,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:46,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:46,279 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970. 2023-07-24 21:10:46,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c52d2ec88d6dbe4039a9f0da5976970: 2023-07-24 21:10:46,282 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=d0950f4e6f52cbf0f042339a231d7eff, regionState=CLOSED 2023-07-24 21:10:46,282 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046282"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046282"}]},"ts":"1690233046282"} 2023-07-24 21:10:46,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:46,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:46,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d3248c46bad1693a300ed900f0a3bb2, disabling compactions & flushes 2023-07-24 21:10:46,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:46,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:46,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. after waiting 0 ms 2023-07-24 21:10:46,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 
2023-07-24 21:10:46,284 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=4c52d2ec88d6dbe4039a9f0da5976970, regionState=CLOSED 2023-07-24 21:10:46,284 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046284"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046284"}]},"ts":"1690233046284"} 2023-07-24 21:10:46,285 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=47 2023-07-24 21:10:46,285 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=47, state=SUCCESS; CloseRegionProcedure 9d1188490bf4ff5f0ca051ba710a55ee, server=jenkins-hbase4.apache.org,39543,1690233037533 in 163 msec 2023-07-24 21:10:46,287 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d1188490bf4ff5f0ca051ba710a55ee, UNASSIGN in 188 msec 2023-07-24 21:10:46,292 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:46,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2. 2023-07-24 21:10:46,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d3248c46bad1693a300ed900f0a3bb2: 2023-07-24 21:10:46,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:46,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:46,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb25f322fc71d3a1737fa27766ba99c0, disabling compactions & flushes 2023-07-24 21:10:46,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:46,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 2023-07-24 21:10:46,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. after waiting 0 ms 2023-07-24 21:10:46,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 
2023-07-24 21:10:46,296 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=9d3248c46bad1693a300ed900f0a3bb2, regionState=CLOSED 2023-07-24 21:10:46,296 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046296"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046296"}]},"ts":"1690233046296"} 2023-07-24 21:10:46,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=46 2023-07-24 21:10:46,303 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=48 2023-07-24 21:10:46,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=46, state=SUCCESS; CloseRegionProcedure d0950f4e6f52cbf0f042339a231d7eff, server=jenkins-hbase4.apache.org,35829,1690233037637 in 186 msec 2023-07-24 21:10:46,303 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=48, state=SUCCESS; CloseRegionProcedure 4c52d2ec88d6dbe4039a9f0da5976970, server=jenkins-hbase4.apache.org,39543,1690233037533 in 190 msec 2023-07-24 21:10:46,304 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=45 2023-07-24 21:10:46,304 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=45, state=SUCCESS; CloseRegionProcedure 9d3248c46bad1693a300ed900f0a3bb2, server=jenkins-hbase4.apache.org,39543,1690233037533 in 186 msec 2023-07-24 21:10:46,305 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d0950f4e6f52cbf0f042339a231d7eff, UNASSIGN in 206 msec 2023-07-24 21:10:46,305 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c52d2ec88d6dbe4039a9f0da5976970, UNASSIGN in 206 msec 2023-07-24 21:10:46,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9d3248c46bad1693a300ed900f0a3bb2, UNASSIGN in 208 msec 2023-07-24 21:10:46,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:46,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0. 
2023-07-24 21:10:46,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb25f322fc71d3a1737fa27766ba99c0: 2023-07-24 21:10:46,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:46,313 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=eb25f322fc71d3a1737fa27766ba99c0, regionState=CLOSED 2023-07-24 21:10:46,313 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046313"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046313"}]},"ts":"1690233046313"} 2023-07-24 21:10:46,317 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=49 2023-07-24 21:10:46,317 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; CloseRegionProcedure eb25f322fc71d3a1737fa27766ba99c0, server=jenkins-hbase4.apache.org,39543,1690233037533 in 204 msec 2023-07-24 21:10:46,319 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=44 2023-07-24 21:10:46,319 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eb25f322fc71d3a1737fa27766ba99c0, UNASSIGN in 220 msec 2023-07-24 21:10:46,320 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233046319"}]},"ts":"1690233046319"} 2023-07-24 21:10:46,321 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 21:10:46,323 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 21:10:46,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 242 msec 2023-07-24 21:10:46,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-24 21:10:46,396 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-24 21:10:46,397 INFO [Listener at localhost/42247] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:46,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:46,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-24 21:10:46,413 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-24 21:10:46,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-24 21:10:46,426 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:46,426 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:46,426 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:46,426 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:46,426 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:46,430 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/recovered.edits] 2023-07-24 21:10:46,430 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/recovered.edits] 2023-07-24 21:10:46,430 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/recovered.edits] 2023-07-24 21:10:46,430 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/recovered.edits] 2023-07-24 21:10:46,430 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/f, FileablePath, 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/recovered.edits] 2023-07-24 21:10:46,445 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/recovered.edits/7.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2/recovered.edits/7.seqid 2023-07-24 21:10:46,445 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/recovered.edits/7.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0/recovered.edits/7.seqid 2023-07-24 21:10:46,445 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/recovered.edits/7.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970/recovered.edits/7.seqid 2023-07-24 21:10:46,446 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/recovered.edits/7.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff/recovered.edits/7.seqid 2023-07-24 21:10:46,446 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/recovered.edits/7.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee/recovered.edits/7.seqid 2023-07-24 21:10:46,447 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d3248c46bad1693a300ed900f0a3bb2 2023-07-24 21:10:46,447 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eb25f322fc71d3a1737fa27766ba99c0 2023-07-24 21:10:46,448 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c52d2ec88d6dbe4039a9f0da5976970 2023-07-24 21:10:46,448 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d0950f4e6f52cbf0f042339a231d7eff 2023-07-24 21:10:46,448 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9d1188490bf4ff5f0ca051ba710a55ee 2023-07-24 21:10:46,448 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 21:10:46,474 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 21:10:46,478 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 21:10:46,479 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-24 21:10:46,479 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233046479"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:46,479 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233046479"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:46,479 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233046479"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:46,479 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233046479"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:46,479 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233046479"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:46,482 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 21:10:46,482 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9d3248c46bad1693a300ed900f0a3bb2, NAME => 'Group_testTableMoveTruncateAndDrop,,1690233043720.9d3248c46bad1693a300ed900f0a3bb2.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => d0950f4e6f52cbf0f042339a231d7eff, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690233043720.d0950f4e6f52cbf0f042339a231d7eff.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 9d1188490bf4ff5f0ca051ba710a55ee, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233043720.9d1188490bf4ff5f0ca051ba710a55ee.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 4c52d2ec88d6dbe4039a9f0da5976970, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233043720.4c52d2ec88d6dbe4039a9f0da5976970.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, 
{ENCODED => eb25f322fc71d3a1737fa27766ba99c0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690233043720.eb25f322fc71d3a1737fa27766ba99c0.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 21:10:46,482 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-24 21:10:46,482 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690233046482"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:46,484 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 21:10:46,490 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,490 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,490 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,490 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,490 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:46,491 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707 empty. 2023-07-24 21:10:46,491 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2 empty. 2023-07-24 21:10:46,491 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a empty. 2023-07-24 21:10:46,491 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233 empty. 2023-07-24 21:10:46,491 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b empty. 
2023-07-24 21:10:46,491 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,492 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,492 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,492 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,492 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:46,492 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 21:10:46,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-24 21:10:46,520 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:46,522 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 73987e3428ac5e236f32114a3dce9c7b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:46,522 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => a3ba1c9ac6a9cb45da05c636b11da233, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:46,522 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 418225f0f5eb351ffffdb9dd6769e707, NAME => 'Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:46,568 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,569 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 73987e3428ac5e236f32114a3dce9c7b, disabling compactions & flushes 2023-07-24 21:10:46,569 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 2023-07-24 21:10:46,569 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 2023-07-24 21:10:46,569 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. after waiting 0 ms 2023-07-24 21:10:46,569 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 2023-07-24 21:10:46,569 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 
2023-07-24 21:10:46,569 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 73987e3428ac5e236f32114a3dce9c7b: 2023-07-24 21:10:46,569 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => bd7100f9a4ce2652a16a27eeab4548b2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:46,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 418225f0f5eb351ffffdb9dd6769e707, disabling compactions & flushes 2023-07-24 21:10:46,589 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 2023-07-24 21:10:46,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 2023-07-24 21:10:46,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. after waiting 0 ms 2023-07-24 21:10:46,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 2023-07-24 21:10:46,589 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 
2023-07-24 21:10:46,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 418225f0f5eb351ffffdb9dd6769e707: 2023-07-24 21:10:46,590 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => de5985c07402b34d127afc486a9c893a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:46,594 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,594 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing a3ba1c9ac6a9cb45da05c636b11da233, disabling compactions & flushes 2023-07-24 21:10:46,594 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 2023-07-24 21:10:46,594 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 2023-07-24 21:10:46,594 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. after waiting 0 ms 2023-07-24 21:10:46,595 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 2023-07-24 21:10:46,595 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 
2023-07-24 21:10:46,595 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for a3ba1c9ac6a9cb45da05c636b11da233: 2023-07-24 21:10:46,599 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,599 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing bd7100f9a4ce2652a16a27eeab4548b2, disabling compactions & flushes 2023-07-24 21:10:46,599 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:46,599 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:46,599 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. after waiting 0 ms 2023-07-24 21:10:46,599 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:46,599 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:46,599 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for bd7100f9a4ce2652a16a27eeab4548b2: 2023-07-24 21:10:46,608 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,608 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing de5985c07402b34d127afc486a9c893a, disabling compactions & flushes 2023-07-24 21:10:46,608 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:46,608 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:46,608 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 
after waiting 0 ms 2023-07-24 21:10:46,608 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:46,608 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:46,608 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for de5985c07402b34d127afc486a9c893a: 2023-07-24 21:10:46,616 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046616"}]},"ts":"1690233046616"} 2023-07-24 21:10:46,616 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046616"}]},"ts":"1690233046616"} 2023-07-24 21:10:46,616 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046616"}]},"ts":"1690233046616"} 2023-07-24 21:10:46,616 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046616"}]},"ts":"1690233046616"} 2023-07-24 21:10:46,616 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233046616"}]},"ts":"1690233046616"} 2023-07-24 21:10:46,620 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-24 21:10:46,621 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233046621"}]},"ts":"1690233046621"} 2023-07-24 21:10:46,623 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 21:10:46,628 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:46,628 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:46,628 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:46,628 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:46,628 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=418225f0f5eb351ffffdb9dd6769e707, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73987e3428ac5e236f32114a3dce9c7b, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a3ba1c9ac6a9cb45da05c636b11da233, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd7100f9a4ce2652a16a27eeab4548b2, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de5985c07402b34d127afc486a9c893a, ASSIGN}] 2023-07-24 21:10:46,631 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73987e3428ac5e236f32114a3dce9c7b, ASSIGN 2023-07-24 21:10:46,631 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=418225f0f5eb351ffffdb9dd6769e707, ASSIGN 2023-07-24 21:10:46,631 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a3ba1c9ac6a9cb45da05c636b11da233, ASSIGN 2023-07-24 21:10:46,631 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd7100f9a4ce2652a16a27eeab4548b2, ASSIGN 2023-07-24 21:10:46,631 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de5985c07402b34d127afc486a9c893a, ASSIGN 2023-07-24 21:10:46,632 INFO [PEWorker-4] 
assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73987e3428ac5e236f32114a3dce9c7b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35829,1690233037637; forceNewPlan=false, retain=false 2023-07-24 21:10:46,633 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=418225f0f5eb351ffffdb9dd6769e707, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:46,633 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a3ba1c9ac6a9cb45da05c636b11da233, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35829,1690233037637; forceNewPlan=false, retain=false 2023-07-24 21:10:46,633 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de5985c07402b34d127afc486a9c893a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35829,1690233037637; forceNewPlan=false, retain=false 2023-07-24 21:10:46,633 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd7100f9a4ce2652a16a27eeab4548b2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:46,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-24 21:10:46,782 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 21:10:46,786 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=418225f0f5eb351ffffdb9dd6769e707, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:46,786 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=73987e3428ac5e236f32114a3dce9c7b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:46,786 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=de5985c07402b34d127afc486a9c893a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:46,786 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=bd7100f9a4ce2652a16a27eeab4548b2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:46,786 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=a3ba1c9ac6a9cb45da05c636b11da233, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:46,786 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046785"}]},"ts":"1690233046785"} 2023-07-24 21:10:46,786 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046785"}]},"ts":"1690233046785"} 2023-07-24 21:10:46,786 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046786"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046786"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046786"}]},"ts":"1690233046786"} 2023-07-24 21:10:46,786 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046785"}]},"ts":"1690233046785"} 2023-07-24 21:10:46,786 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233046785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233046785"}]},"ts":"1690233046785"} 2023-07-24 21:10:46,790 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
de5985c07402b34d127afc486a9c893a, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:46,791 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=59, state=RUNNABLE; OpenRegionProcedure bd7100f9a4ce2652a16a27eeab4548b2, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:46,793 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=57, state=RUNNABLE; OpenRegionProcedure 73987e3428ac5e236f32114a3dce9c7b, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:46,794 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=56, state=RUNNABLE; OpenRegionProcedure 418225f0f5eb351ffffdb9dd6769e707, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:46,799 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=58, state=RUNNABLE; OpenRegionProcedure a3ba1c9ac6a9cb45da05c636b11da233, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:46,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:46,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de5985c07402b34d127afc486a9c893a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 21:10:46,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 
2023-07-24 21:10:46,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bd7100f9a4ce2652a16a27eeab4548b2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 21:10:46,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,959 INFO [StoreOpener-de5985c07402b34d127afc486a9c893a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,961 INFO [StoreOpener-bd7100f9a4ce2652a16a27eeab4548b2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,964 DEBUG [StoreOpener-de5985c07402b34d127afc486a9c893a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a/f 2023-07-24 21:10:46,964 DEBUG [StoreOpener-bd7100f9a4ce2652a16a27eeab4548b2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2/f 2023-07-24 21:10:46,964 DEBUG [StoreOpener-bd7100f9a4ce2652a16a27eeab4548b2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2/f 2023-07-24 21:10:46,964 DEBUG [StoreOpener-de5985c07402b34d127afc486a9c893a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a/f 2023-07-24 21:10:46,964 INFO [StoreOpener-bd7100f9a4ce2652a16a27eeab4548b2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bd7100f9a4ce2652a16a27eeab4548b2 columnFamilyName f 2023-07-24 21:10:46,965 INFO [StoreOpener-de5985c07402b34d127afc486a9c893a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de5985c07402b34d127afc486a9c893a columnFamilyName f 2023-07-24 21:10:46,965 INFO [StoreOpener-bd7100f9a4ce2652a16a27eeab4548b2-1] regionserver.HStore(310): Store=bd7100f9a4ce2652a16a27eeab4548b2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:46,965 INFO [StoreOpener-de5985c07402b34d127afc486a9c893a-1] regionserver.HStore(310): Store=de5985c07402b34d127afc486a9c893a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:46,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:46,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:46,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:46,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:46,976 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bd7100f9a4ce2652a16a27eeab4548b2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10270610720, jitterRate=-0.04347483813762665}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:46,976 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened de5985c07402b34d127afc486a9c893a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11267278720, jitterRate=0.049347102642059326}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:46,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bd7100f9a4ce2652a16a27eeab4548b2: 2023-07-24 21:10:46,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for de5985c07402b34d127afc486a9c893a: 2023-07-24 21:10:46,977 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2., pid=62, masterSystemTime=1690233046946 2023-07-24 21:10:46,978 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a., pid=61, masterSystemTime=1690233046943 2023-07-24 21:10:46,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:46,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:46,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 
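The "Opened ..." entries above report SteppingSplitPolicy with a desiredMaxFileSize close to 10 GB adjusted by a small jitterRate; those values are derived from the region split configuration. A fragment, assuming the standard HBase 2.x configuration keys (defaults shown):

// Fragment; assumes org.apache.hadoop.conf.Configuration and
// org.apache.hadoop.hbase.HBaseConfiguration are imported.
Configuration conf = HBaseConfiguration.create();
// SteppingSplitPolicy is the default region split policy in HBase 2.x.
conf.set("hbase.regionserver.region.split.policy",
    "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
// Base max store file size (10 GB). Each region's desiredMaxFileSize in the log is
// this value scaled by (1 + jitterRate), with the jitter bound taken from
// hbase.hregion.max.filesize.jitter.
conf.setLong("hbase.hregion.max.filesize", 10737418240L);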
2023-07-24 21:10:46,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 418225f0f5eb351ffffdb9dd6769e707, NAME => 'Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 21:10:46,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,981 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=bd7100f9a4ce2652a16a27eeab4548b2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:46,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:46,981 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:46,981 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 
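The RegionStateStore Put entries above write each region's info:regioninfo, info:server, info:serverstartcode and info:seqnumDuringOpen columns into hbase:meta as the regions come online; a client can read the resulting assignments back through the RegionLocator. A minimal sketch (illustrative, not part of this test):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Prints each encoded region name and its hosting server,
        // e.g. de5985c07402b34d127afc486a9c893a -> jenkins-hbase4.apache.org,35829,1690233037637
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}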
2023-07-24 21:10:46,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73987e3428ac5e236f32114a3dce9c7b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 21:10:46,981 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=de5985c07402b34d127afc486a9c893a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:46,982 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046981"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233046981"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233046981"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233046981"}]},"ts":"1690233046981"} 2023-07-24 21:10:46,981 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233046980"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233046980"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233046980"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233046980"}]},"ts":"1690233046980"} 2023-07-24 21:10:46,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,981 INFO [StoreOpener-418225f0f5eb351ffffdb9dd6769e707-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:46,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,985 DEBUG [StoreOpener-418225f0f5eb351ffffdb9dd6769e707-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707/f 2023-07-24 21:10:46,985 DEBUG [StoreOpener-418225f0f5eb351ffffdb9dd6769e707-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707/f 2023-07-24 21:10:46,985 INFO 
[StoreOpener-418225f0f5eb351ffffdb9dd6769e707-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 418225f0f5eb351ffffdb9dd6769e707 columnFamilyName f 2023-07-24 21:10:46,985 INFO [StoreOpener-73987e3428ac5e236f32114a3dce9c7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,986 INFO [StoreOpener-418225f0f5eb351ffffdb9dd6769e707-1] regionserver.HStore(310): Store=418225f0f5eb351ffffdb9dd6769e707/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:46,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,987 DEBUG [StoreOpener-73987e3428ac5e236f32114a3dce9c7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b/f 2023-07-24 21:10:46,987 DEBUG [StoreOpener-73987e3428ac5e236f32114a3dce9c7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b/f 2023-07-24 21:10:46,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,988 INFO [StoreOpener-73987e3428ac5e236f32114a3dce9c7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73987e3428ac5e236f32114a3dce9c7b columnFamilyName f 2023-07-24 21:10:46,988 INFO [StoreOpener-73987e3428ac5e236f32114a3dce9c7b-1] regionserver.HStore(310): 
Store=73987e3428ac5e236f32114a3dce9c7b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:46,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:46,993 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-24 21:10:46,993 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure de5985c07402b34d127afc486a9c893a, server=jenkins-hbase4.apache.org,35829,1690233037637 in 194 msec 2023-07-24 21:10:46,994 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=59 2023-07-24 21:10:46,994 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=59, state=SUCCESS; OpenRegionProcedure bd7100f9a4ce2652a16a27eeab4548b2, server=jenkins-hbase4.apache.org,39543,1690233037533 in 194 msec 2023-07-24 21:10:46,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:46,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:46,995 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de5985c07402b34d127afc486a9c893a, ASSIGN in 365 msec 2023-07-24 21:10:46,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 418225f0f5eb351ffffdb9dd6769e707; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9879040960, jitterRate=-0.07994261384010315}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:46,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 418225f0f5eb351ffffdb9dd6769e707: 2023-07-24 21:10:46,996 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd7100f9a4ce2652a16a27eeab4548b2, ASSIGN in 366 msec 2023-07-24 21:10:46,997 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707., pid=64, masterSystemTime=1690233046946 2023-07-24 21:10:46,998 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 2023-07-24 21:10:46,998 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 2023-07-24 21:10:46,999 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=418225f0f5eb351ffffdb9dd6769e707, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:46,999 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233046999"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233046999"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233046999"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233046999"}]},"ts":"1690233046999"} 2023-07-24 21:10:47,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:47,002 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73987e3428ac5e236f32114a3dce9c7b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9584701760, jitterRate=-0.10735508799552917}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:47,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73987e3428ac5e236f32114a3dce9c7b: 2023-07-24 21:10:47,003 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b., pid=63, masterSystemTime=1690233046943 2023-07-24 21:10:47,004 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=56 2023-07-24 21:10:47,004 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=56, state=SUCCESS; OpenRegionProcedure 418225f0f5eb351ffffdb9dd6769e707, server=jenkins-hbase4.apache.org,39543,1690233037533 in 207 msec 2023-07-24 21:10:47,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 2023-07-24 21:10:47,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 2023-07-24 21:10:47,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 
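With the ASSIGN procedures above completing, the table should again report five online regions and be available to clients. A short fragment, assuming the same imports as the sketches above plus org.apache.hadoop.hbase.client.Admin:

// Fragment; illustrative verification after the assigns finish.
try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = conn.getAdmin()) {
  TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
  int regionCount = admin.getRegions(tn).size();   // expected: 5
  boolean available = admin.isTableAvailable(tn);  // true once every region is open
}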
2023-07-24 21:10:47,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a3ba1c9ac6a9cb45da05c636b11da233, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 21:10:47,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:47,006 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=73987e3428ac5e236f32114a3dce9c7b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:47,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=418225f0f5eb351ffffdb9dd6769e707, ASSIGN in 376 msec 2023-07-24 21:10:47,006 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233047005"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233047005"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233047005"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233047005"}]},"ts":"1690233047005"} 2023-07-24 21:10:47,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=57 2023-07-24 21:10:47,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=57, state=SUCCESS; OpenRegionProcedure 73987e3428ac5e236f32114a3dce9c7b, server=jenkins-hbase4.apache.org,35829,1690233037637 in 215 msec 2023-07-24 21:10:47,013 INFO [StoreOpener-a3ba1c9ac6a9cb45da05c636b11da233-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,015 DEBUG [StoreOpener-a3ba1c9ac6a9cb45da05c636b11da233-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233/f 2023-07-24 21:10:47,015 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73987e3428ac5e236f32114a3dce9c7b, ASSIGN in 382 msec 
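The CompactionConfiguration(173) entries above print the per-store compaction settings for column family f; they correspond to the following configuration keys (the values in the log are the 2.x defaults). A fragment, same import assumptions as above:

Configuration conf = HBaseConfiguration.create();
conf.setLong("hbase.hstore.compaction.min.size", 134217728L);   // minCompactSize: 128 MB
conf.setInt("hbase.hstore.compaction.min", 3);                  // minFilesToCompact
conf.setInt("hbase.hstore.compaction.max", 10);                 // maxFilesToCompact
conf.setFloat("hbase.hstore.compaction.ratio", 1.2F);           // ratio
conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);   // off-peak ratio
conf.setLong("hbase.hregion.majorcompaction", 604800000L);      // major period: 7 days
conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5F);    // major jitter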
2023-07-24 21:10:47,015 DEBUG [StoreOpener-a3ba1c9ac6a9cb45da05c636b11da233-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233/f 2023-07-24 21:10:47,016 INFO [StoreOpener-a3ba1c9ac6a9cb45da05c636b11da233-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a3ba1c9ac6a9cb45da05c636b11da233 columnFamilyName f 2023-07-24 21:10:47,017 INFO [StoreOpener-a3ba1c9ac6a9cb45da05c636b11da233-1] regionserver.HStore(310): Store=a3ba1c9ac6a9cb45da05c636b11da233/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:47,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-24 21:10:47,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:47,032 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a3ba1c9ac6a9cb45da05c636b11da233; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11531496480, jitterRate=0.07395429909229279}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:47,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a3ba1c9ac6a9cb45da05c636b11da233: 2023-07-24 21:10:47,033 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233., pid=65, masterSystemTime=1690233046943 2023-07-24 21:10:47,036 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 2023-07-24 21:10:47,036 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 2023-07-24 21:10:47,036 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=a3ba1c9ac6a9cb45da05c636b11da233, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:47,036 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233047036"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233047036"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233047036"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233047036"}]},"ts":"1690233047036"} 2023-07-24 21:10:47,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=58 2023-07-24 21:10:47,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=58, state=SUCCESS; OpenRegionProcedure a3ba1c9ac6a9cb45da05c636b11da233, server=jenkins-hbase4.apache.org,35829,1690233037637 in 242 msec 2023-07-24 21:10:47,043 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-24 21:10:47,044 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233047043"}]},"ts":"1690233047043"} 2023-07-24 21:10:47,044 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a3ba1c9ac6a9cb45da05c636b11da233, ASSIGN in 413 msec 2023-07-24 21:10:47,045 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 21:10:47,048 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-24 21:10:47,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 644 msec 2023-07-24 21:10:47,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-24 21:10:47,523 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-24 21:10:47,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:47,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:47,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:47,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:47,527 INFO [Listener at localhost/42247] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-24 21:10:47,532 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233047532"}]},"ts":"1690233047532"} 2023-07-24 21:10:47,534 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 21:10:47,535 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 21:10:47,536 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=418225f0f5eb351ffffdb9dd6769e707, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73987e3428ac5e236f32114a3dce9c7b, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a3ba1c9ac6a9cb45da05c636b11da233, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd7100f9a4ce2652a16a27eeab4548b2, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de5985c07402b34d127afc486a9c893a, UNASSIGN}] 2023-07-24 21:10:47,538 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd7100f9a4ce2652a16a27eeab4548b2, UNASSIGN 2023-07-24 21:10:47,539 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73987e3428ac5e236f32114a3dce9c7b, UNASSIGN 2023-07-24 21:10:47,539 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de5985c07402b34d127afc486a9c893a, UNASSIGN 2023-07-24 21:10:47,539 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a3ba1c9ac6a9cb45da05c636b11da233, UNASSIGN 2023-07-24 21:10:47,540 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=418225f0f5eb351ffffdb9dd6769e707, UNASSIGN 2023-07-24 21:10:47,544 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=73987e3428ac5e236f32114a3dce9c7b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:47,544 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=bd7100f9a4ce2652a16a27eeab4548b2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:47,544 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=418225f0f5eb351ffffdb9dd6769e707, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:47,544 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233047544"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233047544"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233047544"}]},"ts":"1690233047544"} 2023-07-24 21:10:47,544 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233047544"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233047544"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233047544"}]},"ts":"1690233047544"} 2023-07-24 21:10:47,544 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=a3ba1c9ac6a9cb45da05c636b11da233, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:47,544 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=de5985c07402b34d127afc486a9c893a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:47,544 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233047544"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233047544"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233047544"}]},"ts":"1690233047544"} 2023-07-24 21:10:47,544 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233047544"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233047544"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233047544"}]},"ts":"1690233047544"} 2023-07-24 21:10:47,545 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233047544"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233047544"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233047544"}]},"ts":"1690233047544"} 2023-07-24 21:10:47,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=70, state=RUNNABLE; CloseRegionProcedure bd7100f9a4ce2652a16a27eeab4548b2, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:47,547 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=68, state=RUNNABLE; CloseRegionProcedure 73987e3428ac5e236f32114a3dce9c7b, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:47,548 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=69, state=RUNNABLE; CloseRegionProcedure a3ba1c9ac6a9cb45da05c636b11da233, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:47,549 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=67, state=RUNNABLE; CloseRegionProcedure 418225f0f5eb351ffffdb9dd6769e707, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:47,550 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure de5985c07402b34d127afc486a9c893a, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:47,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-24 21:10:47,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:47,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 418225f0f5eb351ffffdb9dd6769e707, disabling compactions & flushes 2023-07-24 21:10:47,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 2023-07-24 21:10:47,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 2023-07-24 21:10:47,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. after waiting 0 ms 2023-07-24 21:10:47,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 
2023-07-24 21:10:47,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:47,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing de5985c07402b34d127afc486a9c893a, disabling compactions & flushes 2023-07-24 21:10:47,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:47,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:47,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. after waiting 0 ms 2023-07-24 21:10:47,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:47,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:47,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707. 2023-07-24 21:10:47,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 418225f0f5eb351ffffdb9dd6769e707: 2023-07-24 21:10:47,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:47,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:47,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bd7100f9a4ce2652a16a27eeab4548b2, disabling compactions & flushes 2023-07-24 21:10:47,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:47,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:47,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. after waiting 0 ms 2023-07-24 21:10:47,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 
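Earlier in this log, TruncateTableProcedure pid=55 (preserveSplits=true) deleted and re-created the same five regions, and the "Checking to see if procedure is done pid=55" entries are the client polling for its completion. A hedged sketch of the corresponding Admin calls (a table must be disabled before it can be truncated; imports as above):

TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = conn.getAdmin()) {
  admin.disableTable(tn);
  admin.truncateTable(tn, true);  // preserveSplits=true keeps the existing region boundaries
  // Asynchronous variant, which surfaces the master-side procedure polling seen in the log:
  // Future<Void> f = admin.truncateTableAsync(tn, true);
  // f.get(60, TimeUnit.SECONDS);
}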
2023-07-24 21:10:47,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:47,718 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=418225f0f5eb351ffffdb9dd6769e707, regionState=CLOSED 2023-07-24 21:10:47,718 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233047718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233047718"}]},"ts":"1690233047718"} 2023-07-24 21:10:47,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a. 2023-07-24 21:10:47,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for de5985c07402b34d127afc486a9c893a: 2023-07-24 21:10:47,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:47,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2. 2023-07-24 21:10:47,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bd7100f9a4ce2652a16a27eeab4548b2: 2023-07-24 21:10:47,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:47,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a3ba1c9ac6a9cb45da05c636b11da233, disabling compactions & flushes 2023-07-24 21:10:47,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 2023-07-24 21:10:47,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 2023-07-24 21:10:47,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. after waiting 0 ms 2023-07-24 21:10:47,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 
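The GetRSGroupInfo requests logged at 21:10:47,524 and 21:10:47,526 retrieve the group Group_testTableMoveTruncateAndDrop_655290510 through the RSGroupAdminEndpoint coprocessor. A fragment, assuming the RSGroupAdminClient helper from the hbase-rsgroup module that this test suite exercises (treat the constructor and method names as assumptions):

import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Fragment; conn is a Connection as in the sketches above.
RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_655290510");
// info.getServers() lists the group's region servers; info.getTables() lists its tables.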
2023-07-24 21:10:47,723 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=de5985c07402b34d127afc486a9c893a, regionState=CLOSED 2023-07-24 21:10:47,723 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690233047723"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233047723"}]},"ts":"1690233047723"} 2023-07-24 21:10:47,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:47,724 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=bd7100f9a4ce2652a16a27eeab4548b2, regionState=CLOSED 2023-07-24 21:10:47,724 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233047724"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233047724"}]},"ts":"1690233047724"} 2023-07-24 21:10:47,725 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=67 2023-07-24 21:10:47,725 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=67, state=SUCCESS; CloseRegionProcedure 418225f0f5eb351ffffdb9dd6769e707, server=jenkins-hbase4.apache.org,39543,1690233037533 in 171 msec 2023-07-24 21:10:47,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=418225f0f5eb351ffffdb9dd6769e707, UNASSIGN in 189 msec 2023-07-24 21:10:47,727 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-24 21:10:47,727 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure de5985c07402b34d127afc486a9c893a, server=jenkins-hbase4.apache.org,35829,1690233037637 in 176 msec 2023-07-24 21:10:47,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:47,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233. 
2023-07-24 21:10:47,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a3ba1c9ac6a9cb45da05c636b11da233: 2023-07-24 21:10:47,729 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=70 2023-07-24 21:10:47,729 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de5985c07402b34d127afc486a9c893a, UNASSIGN in 191 msec 2023-07-24 21:10:47,729 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=70, state=SUCCESS; CloseRegionProcedure bd7100f9a4ce2652a16a27eeab4548b2, server=jenkins-hbase4.apache.org,39543,1690233037533 in 180 msec 2023-07-24 21:10:47,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:47,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73987e3428ac5e236f32114a3dce9c7b, disabling compactions & flushes 2023-07-24 21:10:47,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 2023-07-24 21:10:47,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 2023-07-24 21:10:47,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd7100f9a4ce2652a16a27eeab4548b2, UNASSIGN in 193 msec 2023-07-24 21:10:47,732 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=a3ba1c9ac6a9cb45da05c636b11da233, regionState=CLOSED 2023-07-24 21:10:47,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. after waiting 0 ms 2023-07-24 21:10:47,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 
2023-07-24 21:10:47,732 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233047732"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233047732"}]},"ts":"1690233047732"} 2023-07-24 21:10:47,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:47,736 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=69 2023-07-24 21:10:47,736 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; CloseRegionProcedure a3ba1c9ac6a9cb45da05c636b11da233, server=jenkins-hbase4.apache.org,35829,1690233037637 in 186 msec 2023-07-24 21:10:47,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b. 2023-07-24 21:10:47,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73987e3428ac5e236f32114a3dce9c7b: 2023-07-24 21:10:47,739 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a3ba1c9ac6a9cb45da05c636b11da233, UNASSIGN in 200 msec 2023-07-24 21:10:47,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:47,740 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=73987e3428ac5e236f32114a3dce9c7b, regionState=CLOSED 2023-07-24 21:10:47,740 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690233047740"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233047740"}]},"ts":"1690233047740"} 2023-07-24 21:10:47,743 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=68 2023-07-24 21:10:47,743 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=68, state=SUCCESS; CloseRegionProcedure 73987e3428ac5e236f32114a3dce9c7b, server=jenkins-hbase4.apache.org,35829,1690233037637 in 194 msec 2023-07-24 21:10:47,746 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=66 2023-07-24 21:10:47,746 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73987e3428ac5e236f32114a3dce9c7b, UNASSIGN in 208 msec 2023-07-24 21:10:47,748 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233047747"}]},"ts":"1690233047747"} 2023-07-24 21:10:47,749 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, 
state=DISABLED in hbase:meta 2023-07-24 21:10:47,751 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 21:10:47,760 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 227 msec 2023-07-24 21:10:47,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-24 21:10:47,836 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-24 21:10:47,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,855 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_655290510' 2023-07-24 21:10:47,856 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:47,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:47,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:47,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:47,875 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:47,875 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:47,875 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:47,875 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,875 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:47,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-24 21:10:47,880 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b/recovered.edits] 2023-07-24 21:10:47,880 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233/recovered.edits] 2023-07-24 21:10:47,880 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a/recovered.edits] 2023-07-24 21:10:47,880 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707/recovered.edits] 2023-07-24 21:10:47,880 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2/recovered.edits] 2023-07-24 21:10:47,891 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b/recovered.edits/4.seqid 2023-07-24 21:10:47,891 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707/recovered.edits/4.seqid 2023-07-24 21:10:47,891 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a/recovered.edits/4.seqid 2023-07-24 21:10:47,891 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2/recovered.edits/4.seqid 2023-07-24 21:10:47,892 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233/recovered.edits/4.seqid 2023-07-24 21:10:47,892 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73987e3428ac5e236f32114a3dce9c7b 2023-07-24 21:10:47,892 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de5985c07402b34d127afc486a9c893a 2023-07-24 21:10:47,893 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/418225f0f5eb351ffffdb9dd6769e707 2023-07-24 21:10:47,893 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a3ba1c9ac6a9cb45da05c636b11da233 2023-07-24 21:10:47,893 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd7100f9a4ce2652a16a27eeab4548b2 2023-07-24 21:10:47,893 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 21:10:47,897 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,904 WARN [PEWorker-3] 
procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 21:10:47,907 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 21:10:47,908 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,908 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-24 21:10:47,909 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233047908"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:47,909 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233047908"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:47,909 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233047908"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:47,909 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233047908"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:47,909 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233047908"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:47,911 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 21:10:47,911 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 418225f0f5eb351ffffdb9dd6769e707, NAME => 'Group_testTableMoveTruncateAndDrop,,1690233046450.418225f0f5eb351ffffdb9dd6769e707.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 73987e3428ac5e236f32114a3dce9c7b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690233046450.73987e3428ac5e236f32114a3dce9c7b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => a3ba1c9ac6a9cb45da05c636b11da233, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690233046450.a3ba1c9ac6a9cb45da05c636b11da233.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => bd7100f9a4ce2652a16a27eeab4548b2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690233046450.bd7100f9a4ce2652a16a27eeab4548b2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => de5985c07402b34d127afc486a9c893a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690233046450.de5985c07402b34d127afc486a9c893a.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 21:10:47,911 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-24 21:10:47,911 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690233047911"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:47,913 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 21:10:47,915 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 21:10:47,916 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 71 msec 2023-07-24 21:10:47,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-24 21:10:47,980 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-24 21:10:47,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:47,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:47,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:47,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:47,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:47,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
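By this point the log has reported both Operation: DISABLE (procId 66) and Operation: DELETE (procId 77) as completed: the DeleteTableProcedure archived the region directories under .tmp and removed the five region rows plus the table state row from hbase:meta. A rough sketch of the corresponding client calls, followed by a scan that checks no region rows for the table remain in hbase:meta, is below; it is illustrative only (connection setup and class name are assumptions, and the real test drives this through its shared testing-utility admin).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);   // a table must be disabled before deletion
          }
          admin.deleteTable(table);      // drives the DeleteTableProcedure seen above
          // No region rows for the table should be left in hbase:meta afterwards.
          try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
               ResultScanner rows = meta.getScanner(
                   new Scan().setRowPrefixFilter(Bytes.toBytes(table.getNameAsString() + ",")))) {
            System.out.println("no leftover region rows: " + (rows.next() == null));
          }
        }
      }
    }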
2023-07-24 21:10:47,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:47,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup default 2023-07-24 21:10:47,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:47,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:47,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:47,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:47,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_655290510, current retry=0 2023-07-24 21:10:47,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533] are moved back to Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:48,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_655290510 => default 2023-07-24 21:10:48,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:48,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_655290510 2023-07-24 21:10:48,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:48,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 21:10:48,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:48,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:48,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
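The MoveTables, MoveServers and RemoveRSGroup requests around here come from TestRSGroupsBase.tearDownAfterMethod (visible in the stack trace further down), which drains the per-test group Group_testTableMoveTruncateAndDrop_655290510 back into the default group and then removes it. A hedged sketch of that sequence against the RSGroupAdminClient API this test exercises is shown below; the method signatures are written from memory and may differ slightly between branches, and the helper name is an assumption.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupTeardownSketch {
      /** Drain a test rsgroup back into 'default' and drop it (illustrative helper). */
      static void drainAndRemoveGroup(Connection conn, String group) throws Exception {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        // Move any tables still mapped to the group back to the default group;
        // an empty set is simply ignored, as the moveTables() entries above show.
        Set<TableName> tables = new HashSet<>(groupAdmin.getRSGroupInfo(group).getTables());
        groupAdmin.moveTables(tables, "default");
        // Move the group's region servers back to the default group.
        Set<Address> servers = new HashSet<>(groupAdmin.getRSGroupInfo(group).getServers());
        groupAdmin.moveServers(servers, "default");
        // Finally remove the now-empty group.
        groupAdmin.removeRSGroup(group);
      }
    }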
2023-07-24 21:10:48,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:48,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:48,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:48,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:48,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:48,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:48,029 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:48,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:48,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:48,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:48,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:48,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:48,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:48,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234248061, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:48,063 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:10:48,065 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:48,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,067 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:48,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:48,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:48,107 INFO [Listener at localhost/42247] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=504 (was 422) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7-prefix:jenkins-hbase4.apache.org,43799,1690233041130 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-658668866-172.31.14.131-1690233031453:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:39156 [Receiving block BP-658668866-172.31.14.131-1690233031453:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:44343 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2056145764-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:49346 [Receiving block BP-658668866-172.31.14.131-1690233031453:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2056145764-636 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:33418 [Receiving block BP-658668866-172.31.14.131-1690233031453:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2056145764-637-acceptor-0@208f8034-ServerConnector@2d7a97c6{HTTP/1.1, (http/1.1)}{0.0.0.0:41501} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-658668866-172.31.14.131-1690233031453:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2056145764-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:49386 [Receiving block BP-658668866-172.31.14.131-1690233031453:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:44343 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2056145764-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3960bf34-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59094@0x74ae934f-SendThread(127.0.0.1:59094) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-4-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:33440 [Receiving block BP-658668866-172.31.14.131-1690233031453:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-658668866-172.31.14.131-1690233031453:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-658668866-172.31.14.131-1690233031453:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1134694242_17 at /127.0.0.1:39112 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43799 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) 
org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:39122 [Receiving block BP-658668866-172.31.14.131-1690233031453:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-658668866-172.31.14.131-1690233031453:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2056145764-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43799-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59094@0x74ae934f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-658668866-172.31.14.131-1690233031453:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1134694242_17 at /127.0.0.1:49406 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7-prefix:jenkins-hbase4.apache.org,43799,1690233041130.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43799Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2056145764-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2056145764-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59094@0x74ae934f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) - Thread LEAK? -, OpenFileDescriptor=814 (was 673) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=429 (was 408) - SystemLoadAverage LEAK? 
-, ProcessCount=177 (was 177), AvailableMemoryMB=6110 (was 6471) 2023-07-24 21:10:48,108 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-24 21:10:48,128 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=504, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=429, ProcessCount=177, AvailableMemoryMB=6107 2023-07-24 21:10:48,128 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-24 21:10:48,128 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-24 21:10:48,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:48,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:10:48,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:48,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:48,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:48,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:48,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:48,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:48,147 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:48,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:48,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-24 21:10:48,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:48,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:48,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:48,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:48,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234248162, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:48,163 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:10:48,165 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:48,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,166 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:48,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:48,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:48,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-24 21:10:48,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:48,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:60356 deadline: 1690234248168, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 21:10:48,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-24 21:10:48,170 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:48,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:60356 deadline: 1690234248170, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 21:10:48,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-24 21:10:48,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:48,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:60356 deadline: 1690234248171, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 21:10:48,173 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-24 21:10:48,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-24 21:10:48,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:48,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:48,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:48,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:48,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:10:48,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:48,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:48,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:48,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-24 21:10:48,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:48,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 21:10:48,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:48,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:48,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:10:48,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:48,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:48,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:48,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:48,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:48,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:48,233 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:48,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:48,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:48,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:48,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:48,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:48,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:48,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234248261, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:48,263 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:10:48,265 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:48,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,267 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:48,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:48,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:48,295 INFO [Listener at localhost/42247] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 504) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=814 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=429 (was 429), ProcessCount=177 (was 177), AvailableMemoryMB=6100 (was 6107) 2023-07-24 21:10:48,295 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-24 21:10:48,320 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=429, ProcessCount=177, AvailableMemoryMB=6097 2023-07-24 21:10:48,320 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-24 21:10:48,321 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-24 21:10:48,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:48,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:10:48,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:48,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:48,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:48,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:48,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:48,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:48,342 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:48,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:48,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:48,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:48,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:48,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:48,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:48,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234248375, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:48,377 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:10:48,380 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:48,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,383 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:48,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:48,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:48,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:48,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:48,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-24 21:10:48,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 21:10:48,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:48,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:48,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:48,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:48,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:48,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup bar 2023-07-24 21:10:48,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:48,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 21:10:48,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:48,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:48,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(238): Moving server region 27723428b4c241280e87cd60e505360f, which do not belong to RSGroup bar 2023-07-24 21:10:48,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=27723428b4c241280e87cd60e505360f, REOPEN/MOVE 2023-07-24 21:10:48,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 21:10:48,426 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=27723428b4c241280e87cd60e505360f, REOPEN/MOVE 2023-07-24 21:10:48,429 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=27723428b4c241280e87cd60e505360f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:48,429 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233048429"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233048429"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233048429"}]},"ts":"1690233048429"} 2023-07-24 21:10:48,432 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 27723428b4c241280e87cd60e505360f, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:10:48,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 27723428b4c241280e87cd60e505360f, disabling compactions & flushes 2023-07-24 21:10:48,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:48,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:48,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. after waiting 0 ms 2023-07-24 21:10:48,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:48,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 27723428b4c241280e87cd60e505360f 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-24 21:10:48,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/.tmp/info/9f094b9e7b2b4efd94256de66abd2d48 2023-07-24 21:10:48,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/.tmp/info/9f094b9e7b2b4efd94256de66abd2d48 as hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/info/9f094b9e7b2b4efd94256de66abd2d48 2023-07-24 21:10:48,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/info/9f094b9e7b2b4efd94256de66abd2d48, entries=2, sequenceid=6, filesize=4.8 K 2023-07-24 21:10:48,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 27723428b4c241280e87cd60e505360f in 33ms, sequenceid=6, compaction requested=false 2023-07-24 21:10:48,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-24 21:10:48,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:48,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 27723428b4c241280e87cd60e505360f: 2023-07-24 21:10:48,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 27723428b4c241280e87cd60e505360f move to jenkins-hbase4.apache.org,43799,1690233041130 record at close sequenceid=6 2023-07-24 21:10:48,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,631 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=27723428b4c241280e87cd60e505360f, regionState=CLOSED 2023-07-24 21:10:48,631 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233048631"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233048631"}]},"ts":"1690233048631"} 2023-07-24 21:10:48,635 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-24 21:10:48,635 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 27723428b4c241280e87cd60e505360f, server=jenkins-hbase4.apache.org,40083,1690233037694 in 201 msec 2023-07-24 21:10:48,635 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=27723428b4c241280e87cd60e505360f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:48,786 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=27723428b4c241280e87cd60e505360f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:48,786 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233048786"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233048786"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233048786"}]},"ts":"1690233048786"} 2023-07-24 21:10:48,788 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure 27723428b4c241280e87cd60e505360f, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:48,944 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 
2023-07-24 21:10:48,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 27723428b4c241280e87cd60e505360f, NAME => 'hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:48,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:48,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,946 INFO [StoreOpener-27723428b4c241280e87cd60e505360f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,947 DEBUG [StoreOpener-27723428b4c241280e87cd60e505360f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/info 2023-07-24 21:10:48,947 DEBUG [StoreOpener-27723428b4c241280e87cd60e505360f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/info 2023-07-24 21:10:48,947 INFO [StoreOpener-27723428b4c241280e87cd60e505360f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 27723428b4c241280e87cd60e505360f columnFamilyName info 2023-07-24 21:10:48,954 DEBUG [StoreOpener-27723428b4c241280e87cd60e505360f-1] regionserver.HStore(539): loaded hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/info/9f094b9e7b2b4efd94256de66abd2d48 2023-07-24 21:10:48,955 INFO [StoreOpener-27723428b4c241280e87cd60e505360f-1] regionserver.HStore(310): Store=27723428b4c241280e87cd60e505360f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:48,956 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,957 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,960 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 27723428b4c241280e87cd60e505360f 2023-07-24 21:10:48,961 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 27723428b4c241280e87cd60e505360f; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10935113600, jitterRate=0.01841181516647339}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:48,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 27723428b4c241280e87cd60e505360f: 2023-07-24 21:10:48,962 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f., pid=80, masterSystemTime=1690233048940 2023-07-24 21:10:48,963 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:10:48,964 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 
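On the open side the region comes back with next sequenceid=10, and the master records OPEN with openSeqNum=10 in hbase:meta. A small sketch of how a client could confirm the region's new location after such a move; conn is again a hypothetical open Connection, and reload=true forces a fresh meta lookup instead of using the cached location.

import java.io.IOException;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

final class VerifyRegionLocationSketch {
  // Re-resolve the namespace region's location from hbase:meta and return it,
  // so a caller can compare it against the server the move targeted.
  static HRegionLocation currentNamespaceLocation(Connection conn) throws IOException {
    try (RegionLocator locator = conn.getRegionLocator(TableName.NAMESPACE_TABLE_NAME)) {
      return locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
    }
  }
}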
2023-07-24 21:10:48,964 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=27723428b4c241280e87cd60e505360f, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:48,964 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233048964"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233048964"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233048964"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233048964"}]},"ts":"1690233048964"} 2023-07-24 21:10:48,968 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-24 21:10:48,968 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure 27723428b4c241280e87cd60e505360f, server=jenkins-hbase4.apache.org,43799,1690233041130 in 178 msec 2023-07-24 21:10:48,969 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=27723428b4c241280e87cd60e505360f, REOPEN/MOVE in 544 msec 2023-07-24 21:10:49,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-24 21:10:49,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533, jenkins-hbase4.apache.org,40083,1690233037694] are moved back to default 2023-07-24 21:10:49,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-24 21:10:49,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:49,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:49,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:49,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 21:10:49,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:49,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-24 21:10:49,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-24 21:10:49,441 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:10:49,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-24 21:10:49,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 21:10:49,444 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:49,445 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 21:10:49,445 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:49,446 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:49,448 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:10:49,450 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,451 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a empty. 
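The create request above becomes CreateTableProcedure pid=81, which writes the .tabledesc file and the region directory under .tmp before assigning the region. A client-side sketch of an equivalent create using the 2.x descriptor builders follows; conn is a hypothetical open Connection, and only the attributes that differ from the defaults in the logged descriptor are set explicitly.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

final class CreateGroupTestTableSketch {
  // Create 'Group_testFailRemoveGroup' with a single column family 'f',
  // matching the descriptor printed in the log (BLOOMFILTER => 'NONE', VERSIONS => '1').
  static void createTable(Connection conn) throws IOException {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setBloomFilterType(BloomType.NONE)
            .setMaxVersions(1)
            .build())
        .build();
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(td); // blocks until the CreateTableProcedure completes
    }
  }
}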
2023-07-24 21:10:49,452 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,452 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 21:10:49,485 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:49,487 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 65ac7900d2c8f3cc1cfd0b7bddc6340a, NAME => 'Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:49,510 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:49,510 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 65ac7900d2c8f3cc1cfd0b7bddc6340a, disabling compactions & flushes 2023-07-24 21:10:49,510 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:49,510 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:49,510 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. after waiting 0 ms 2023-07-24 21:10:49,510 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:49,510 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 
2023-07-24 21:10:49,510 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 65ac7900d2c8f3cc1cfd0b7bddc6340a: 2023-07-24 21:10:49,513 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:10:49,514 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233049514"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233049514"}]},"ts":"1690233049514"} 2023-07-24 21:10:49,516 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 21:10:49,517 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:10:49,517 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233049517"}]},"ts":"1690233049517"} 2023-07-24 21:10:49,518 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-24 21:10:49,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, ASSIGN}] 2023-07-24 21:10:49,530 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, ASSIGN 2023-07-24 21:10:49,530 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:49,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 21:10:49,682 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:49,683 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233049682"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233049682"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233049682"}]},"ts":"1690233049682"} 2023-07-24 21:10:49,689 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 
21:10:49,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 21:10:49,846 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:49,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65ac7900d2c8f3cc1cfd0b7bddc6340a, NAME => 'Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:49,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:49,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,849 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,851 DEBUG [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/f 2023-07-24 21:10:49,851 DEBUG [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/f 2023-07-24 21:10:49,851 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65ac7900d2c8f3cc1cfd0b7bddc6340a columnFamilyName f 2023-07-24 21:10:49,852 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] regionserver.HStore(310): Store=65ac7900d2c8f3cc1cfd0b7bddc6340a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:49,853 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:49,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:49,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 65ac7900d2c8f3cc1cfd0b7bddc6340a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10798314240, jitterRate=0.005671381950378418}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:49,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 65ac7900d2c8f3cc1cfd0b7bddc6340a: 2023-07-24 21:10:49,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a., pid=83, masterSystemTime=1690233049841 2023-07-24 21:10:49,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:49,867 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 
2023-07-24 21:10:49,867 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:49,868 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233049867"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233049867"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233049867"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233049867"}]},"ts":"1690233049867"} 2023-07-24 21:10:49,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-24 21:10:49,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,43799,1690233041130 in 180 msec 2023-07-24 21:10:49,876 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-24 21:10:49,876 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, ASSIGN in 344 msec 2023-07-24 21:10:49,877 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:10:49,877 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233049877"}]},"ts":"1690233049877"} 2023-07-24 21:10:49,888 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-24 21:10:49,892 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:10:49,894 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 455 msec 2023-07-24 21:10:50,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 21:10:50,048 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-24 21:10:50,049 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-24 21:10:50,049 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:50,054 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
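Once pid=81 finishes, the test utility blocks until every region of the new table shows up as assigned, with the 60000 ms bound logged below. A minimal sketch of that wait, assuming testUtil is the already-started HBaseTestingUtility (the variable name is an assumption, not the test's own).

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

final class WaitForAssignmentSketch {
  // Block until hbase:meta and the assignment manager agree that every region
  // of the table is assigned, up to the utility's timeout.
  static void waitForTable(HBaseTestingUtility testUtil) throws IOException {
    testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testFailRemoveGroup"));
  }
}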
2023-07-24 21:10:50,055 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:50,055 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-24 21:10:50,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-24 21:10:50,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:50,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 21:10:50,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:50,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:50,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-24 21:10:50,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 65ac7900d2c8f3cc1cfd0b7bddc6340a to RSGroup bar 2023-07-24 21:10:50,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:50,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:50,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:50,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:50,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 21:10:50,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:50,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, REOPEN/MOVE 2023-07-24 21:10:50,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-24 21:10:50,066 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, REOPEN/MOVE 2023-07-24 21:10:50,067 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:50,068 DEBUG [PEWorker-5] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233050067"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233050067"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233050067"}]},"ts":"1690233050067"} 2023-07-24 21:10:50,069 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:50,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 65ac7900d2c8f3cc1cfd0b7bddc6340a, disabling compactions & flushes 2023-07-24 21:10:50,226 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:50,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:50,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. after waiting 0 ms 2023-07-24 21:10:50,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:50,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:50,237 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 
2023-07-24 21:10:50,237 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 65ac7900d2c8f3cc1cfd0b7bddc6340a: 2023-07-24 21:10:50,237 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 65ac7900d2c8f3cc1cfd0b7bddc6340a move to jenkins-hbase4.apache.org,39543,1690233037533 record at close sequenceid=2 2023-07-24 21:10:50,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,245 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=CLOSED 2023-07-24 21:10:50,245 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233050245"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233050245"}]},"ts":"1690233050245"} 2023-07-24 21:10:50,249 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-24 21:10:50,249 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,43799,1690233041130 in 178 msec 2023-07-24 21:10:50,251 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:50,401 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 21:10:50,402 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:50,402 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233050401"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233050401"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233050401"}]},"ts":"1690233050401"} 2023-07-24 21:10:50,404 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:50,561 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 
2023-07-24 21:10:50,561 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65ac7900d2c8f3cc1cfd0b7bddc6340a, NAME => 'Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:50,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:50,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,564 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,565 DEBUG [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/f 2023-07-24 21:10:50,565 DEBUG [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/f 2023-07-24 21:10:50,565 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65ac7900d2c8f3cc1cfd0b7bddc6340a columnFamilyName f 2023-07-24 21:10:50,566 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] regionserver.HStore(310): Store=65ac7900d2c8f3cc1cfd0b7bddc6340a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:50,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,569 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:50,573 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 65ac7900d2c8f3cc1cfd0b7bddc6340a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10626703520, jitterRate=-0.010311111807823181}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:50,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 65ac7900d2c8f3cc1cfd0b7bddc6340a: 2023-07-24 21:10:50,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a., pid=86, masterSystemTime=1690233050556 2023-07-24 21:10:50,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:50,577 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:50,578 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:50,578 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233050578"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233050578"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233050578"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233050578"}]},"ts":"1690233050578"} 2023-07-24 21:10:50,582 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-24 21:10:50,582 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,39543,1690233037533 in 176 msec 2023-07-24 21:10:50,584 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, REOPEN/MOVE in 518 msec 2023-07-24 21:10:50,728 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 21:10:51,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-24 21:10:51,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
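The MoveTables request is what drives the REOPEN/MOVE above (pid=84): the master closes the region on a server outside the target group, reopens it on one inside the group, and ProcedureSyncWait holds the RPC until the transit procedure finishes. A sketch of the client call through the rsgroup admin client shipped in the hbase-rsgroup module; conn is a hypothetical open Connection.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class MoveTableToGroupSketch {
  // Move the table into RSGroup 'bar'; the master reopens each of its regions on a
  // server belonging to the target group and waits for those moves before returning.
  static void moveToBar(Connection conn) throws IOException {
    RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
  }
}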
2023-07-24 21:10:51,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:51,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:51,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:51,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 21:10:51,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:51,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 21:10:51,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:51,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:60356 deadline: 1690234251080, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-24 21:10:51,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup default 2023-07-24 21:10:51,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:51,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:60356 deadline: 1690234251082, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-24 21:10:51,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-24 21:10:51,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:51,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 21:10:51,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:51,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:51,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-24 21:10:51,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 65ac7900d2c8f3cc1cfd0b7bddc6340a to RSGroup default 2023-07-24 21:10:51,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, REOPEN/MOVE 2023-07-24 21:10:51,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 21:10:51,100 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, REOPEN/MOVE 2023-07-24 21:10:51,101 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:51,101 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233051101"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233051101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233051101"}]},"ts":"1690233051101"} 2023-07-24 21:10:51,107 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:51,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 65ac7900d2c8f3cc1cfd0b7bddc6340a, disabling compactions & flushes 2023-07-24 21:10:51,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:51,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:51,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. after waiting 0 ms 2023-07-24 21:10:51,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:51,274 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:51,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 
2023-07-24 21:10:51,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 65ac7900d2c8f3cc1cfd0b7bddc6340a: 2023-07-24 21:10:51,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 65ac7900d2c8f3cc1cfd0b7bddc6340a move to jenkins-hbase4.apache.org,43799,1690233041130 record at close sequenceid=5 2023-07-24 21:10:51,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,278 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=CLOSED 2023-07-24 21:10:51,278 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233051278"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233051278"}]},"ts":"1690233051278"} 2023-07-24 21:10:51,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-24 21:10:51,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,39543,1690233037533 in 176 msec 2023-07-24 21:10:51,282 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:51,433 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:51,433 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233051432"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233051432"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233051432"}]},"ts":"1690233051432"} 2023-07-24 21:10:51,435 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:51,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 
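Each "RegionStateStore ... Put" entry in this log is a write against the region's row in hbase:meta, updating qualifiers such as info:regioninfo, info:sn, info:server and info:state. Those same cells can be read back to see where a region currently sits; a minimal sketch under the assumption that a RegionInfo for the table's single region is already at hand.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

final class ReadMetaRegionStateSketch {
  // Read back the info:server and info:state cells for a region; the row key is the
  // full region name (table,startkey,timestamp.encodedname.) as printed in the Puts above.
  static String[] serverAndState(Connection conn, RegionInfo region) throws IOException {
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Result r = meta.get(new Get(region.getRegionName())
          .addColumn(Bytes.toBytes("info"), Bytes.toBytes("server"))
          .addColumn(Bytes.toBytes("info"), Bytes.toBytes("state")));
      return new String[] {
          Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"))),
          Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state")))
      };
    }
  }
}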
2023-07-24 21:10:51,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 65ac7900d2c8f3cc1cfd0b7bddc6340a, NAME => 'Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:51,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:51,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,595 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,596 DEBUG [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/f 2023-07-24 21:10:51,596 DEBUG [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/f 2023-07-24 21:10:51,597 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 65ac7900d2c8f3cc1cfd0b7bddc6340a columnFamilyName f 2023-07-24 21:10:51,598 INFO [StoreOpener-65ac7900d2c8f3cc1cfd0b7bddc6340a-1] regionserver.HStore(310): Store=65ac7900d2c8f3cc1cfd0b7bddc6340a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:51,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,601 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:51,606 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 65ac7900d2c8f3cc1cfd0b7bddc6340a; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11605351680, jitterRate=0.0808326005935669}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:51,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 65ac7900d2c8f3cc1cfd0b7bddc6340a: 2023-07-24 21:10:51,608 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a., pid=89, masterSystemTime=1690233051586 2023-07-24 21:10:51,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:51,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:51,610 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:51,611 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233051610"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233051610"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233051610"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233051610"}]},"ts":"1690233051610"} 2023-07-24 21:10:51,614 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-24 21:10:51,614 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,43799,1690233041130 in 177 msec 2023-07-24 21:10:51,616 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, REOPEN/MOVE in 517 msec 2023-07-24 21:10:52,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-24 21:10:52,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
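The REOPEN/MOVE procedures above (pid=87..89) are the server-side effect of moving a table between rsgroups. A minimal client-side sketch of the call that drives them, assuming the RSGroupAdminClient from this hbase-rsgroup module and reusing the table and group names from the log; the class and method names below are my own illustration, not part of the log:

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Moving the table back to "default" makes the master reassign each of its
      // regions onto a server of the target group; that reassignment shows up in
      // the log as a TransitRegionStateProcedure (REOPEN/MOVE) per region.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
          "default");
    }
  }
}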
2023-07-24 21:10:52,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:52,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 21:10:52,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:52,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:60356 deadline: 1690234252109, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
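The ConstraintException above is the outcome this test expects: removeRSGroup refuses to drop a group that still owns servers. A hedged sketch of the cleanup order the rest of the log then follows (move the servers back to the default group, then remove the group); it reuses the rsGroupAdmin client from the previous sketch, and the helper name is mine, not from the log:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

class RemoveGroupSketch {
  // Remove a group such as "bar" only after emptying it, mirroring the log's sequence.
  static void removeGroupSafely(RSGroupAdminClient rsGroupAdmin, String group) throws Exception {
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (info != null && !info.getServers().isEmpty()) {
      // A group that still has servers cannot be removed (ConstraintException above),
      // so move them back to the default group first.
      Set<Address> servers = new HashSet<>(info.getServers());
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
    }
    rsGroupAdmin.removeRSGroup(group);
  }
}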
2023-07-24 21:10:52,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup default 2023-07-24 21:10:52,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 21:10:52,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:52,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:52,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-24 21:10:52,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533, jenkins-hbase4.apache.org,40083,1690233037694] are moved back to bar 2023-07-24 21:10:52,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-24 21:10:52,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:52,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 21:10:52,124 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40083] ipc.CallRunner(144): callId: 214 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:50568 deadline: 1690233112124, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43799 startCode=1690233041130. As of locationSeqNum=6. 
2023-07-24 21:10:52,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:52,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 21:10:52,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:52,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,253 INFO [Listener at localhost/42247] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-24 21:10:52,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-24 21:10:52,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-24 21:10:52,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 21:10:52,260 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233052260"}]},"ts":"1690233052260"} 2023-07-24 21:10:52,262 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-24 21:10:52,264 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-24 21:10:52,266 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, UNASSIGN}] 2023-07-24 21:10:52,267 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, UNASSIGN 2023-07-24 21:10:52,267 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:52,268 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233052267"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233052267"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233052267"}]},"ts":"1690233052267"} 2023-07-24 21:10:52,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:52,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 21:10:52,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:52,425 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 65ac7900d2c8f3cc1cfd0b7bddc6340a, disabling compactions & flushes 2023-07-24 21:10:52,425 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:52,425 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:52,425 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. after waiting 0 ms 2023-07-24 21:10:52,425 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 2023-07-24 21:10:52,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 21:10:52,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a. 
2023-07-24 21:10:52,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 65ac7900d2c8f3cc1cfd0b7bddc6340a: 2023-07-24 21:10:52,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:52,447 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=65ac7900d2c8f3cc1cfd0b7bddc6340a, regionState=CLOSED 2023-07-24 21:10:52,447 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690233052447"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233052447"}]},"ts":"1690233052447"} 2023-07-24 21:10:52,450 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-24 21:10:52,450 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 65ac7900d2c8f3cc1cfd0b7bddc6340a, server=jenkins-hbase4.apache.org,43799,1690233041130 in 179 msec 2023-07-24 21:10:52,459 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-24 21:10:52,460 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=65ac7900d2c8f3cc1cfd0b7bddc6340a, UNASSIGN in 185 msec 2023-07-24 21:10:52,460 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233052460"}]},"ts":"1690233052460"} 2023-07-24 21:10:52,462 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-24 21:10:52,465 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-24 21:10:52,467 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 212 msec 2023-07-24 21:10:52,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 21:10:52,564 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-24 21:10:52,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-24 21:10:52,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 21:10:52,579 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 21:10:52,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-24 21:10:52,582 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 21:10:52,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:52,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:52,587 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:52,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-24 21:10:52,590 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/recovered.edits] 2023-07-24 21:10:52,599 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/recovered.edits/10.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a/recovered.edits/10.seqid 2023-07-24 21:10:52,600 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testFailRemoveGroup/65ac7900d2c8f3cc1cfd0b7bddc6340a 2023-07-24 21:10:52,600 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 21:10:52,606 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 21:10:52,609 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-24 21:10:52,611 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-24 21:10:52,613 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 21:10:52,613 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-24 21:10:52,613 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233052613"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:52,616 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 21:10:52,616 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 65ac7900d2c8f3cc1cfd0b7bddc6340a, NAME => 'Group_testFailRemoveGroup,,1690233049437.65ac7900d2c8f3cc1cfd0b7bddc6340a.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 21:10:52,616 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-24 21:10:52,616 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690233052616"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:52,618 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-24 21:10:52,620 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 21:10:52,622 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 54 msec 2023-07-24 21:10:52,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-24 21:10:52,690 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-24 21:10:52,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:52,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
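Before the rsgroup bookkeeping is restored, the test drops its table, which is what the DisableTableProcedure (pid=90) and DeleteTableProcedure (pid=93) above record. A minimal sketch of the equivalent Admin calls, assuming an open Connection as in the first sketch; the helper name is illustrative only:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

class DropTableSketch {
  static void dropTable(Connection conn, String name) throws Exception {
    TableName table = TableName.valueOf(name);
    try (Admin admin = conn.getAdmin()) {
      if (admin.tableExists(table)) {
        if (admin.isTableEnabled(table)) {
          admin.disableTable(table); // DisableTableProcedure, as pid=90 in the log
        }
        admin.deleteTable(table);    // DeleteTableProcedure, as pid=93 in the log
      }
    }
  }
}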
2023-07-24 21:10:52,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:52,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:52,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:52,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:52,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:52,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:52,711 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:52,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:52,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:52,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:52,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:52,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:52,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:52,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234252728, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:52,729 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:10:52,731 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:52,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,732 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:52,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:52,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:52,751 INFO [Listener at localhost/42247] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=514 (was 507) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-15 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:39264 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1172574549_17 at /127.0.0.1:33610 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62d0debf-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x526d64d3-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1172574549_17 at /127.0.0.1:33636 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:43252 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=818 (was 814) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 429), ProcessCount=178 (was 177) - ProcessCount LEAK? -, AvailableMemoryMB=5850 (was 6097) 2023-07-24 21:10:52,752 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 21:10:52,771 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=514, OpenFileDescriptor=818, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=177, AvailableMemoryMB=5849 2023-07-24 21:10:52,771 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 21:10:52,771 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-24 21:10:52,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:52,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:10:52,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:52,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:52,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:52,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:52,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:52,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:52,789 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:52,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:52,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-24 21:10:52,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:52,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:52,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:52,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:52,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:52,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234252804, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:52,805 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:10:52,809 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:52,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,810 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:52,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:52,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:52,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:52,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:52,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1213385114 2023-07-24 21:10:52,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1213385114 2023-07-24 21:10:52,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:52,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:52,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:52,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,826 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35829] to rsgroup Group_testMultiTableMove_1213385114 2023-07-24 21:10:52,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1213385114 2023-07-24 21:10:52,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:52,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:52,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 21:10:52,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637] are moved back to default 2023-07-24 21:10:52,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1213385114 2023-07-24 21:10:52,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:52,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:52,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:52,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1213385114 2023-07-24 21:10:52,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:52,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:52,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 21:10:52,844 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:10:52,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-24 21:10:52,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 21:10:52,847 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:52,847 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1213385114 2023-07-24 21:10:52,848 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:52,848 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:52,853 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:10:52,855 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:52,856 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 empty. 2023-07-24 21:10:52,856 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:52,857 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 21:10:52,880 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:52,881 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 622573e4a8b1124aae342349dff03a26, NAME => 'GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:52,892 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:52,893 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
622573e4a8b1124aae342349dff03a26, disabling compactions & flushes 2023-07-24 21:10:52,893 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:52,893 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:52,893 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. after waiting 0 ms 2023-07-24 21:10:52,893 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:52,893 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:52,893 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 622573e4a8b1124aae342349dff03a26: 2023-07-24 21:10:52,895 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:10:52,896 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233052896"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233052896"}]},"ts":"1690233052896"} 2023-07-24 21:10:52,898 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 21:10:52,899 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:10:52,899 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233052899"}]},"ts":"1690233052899"} 2023-07-24 21:10:52,900 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-24 21:10:52,904 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:52,904 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:52,904 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:52,904 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:52,904 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:52,904 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, ASSIGN}] 2023-07-24 21:10:52,906 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, ASSIGN 2023-07-24 21:10:52,906 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:52,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 21:10:53,057 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 21:10:53,058 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=622573e4a8b1124aae342349dff03a26, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:53,059 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233053058"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233053058"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233053058"}]},"ts":"1690233053058"} 2023-07-24 21:10:53,064 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 622573e4a8b1124aae342349dff03a26, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:53,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 21:10:53,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:53,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 622573e4a8b1124aae342349dff03a26, NAME => 'GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:53,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:53,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:53,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:53,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:53,228 INFO [StoreOpener-622573e4a8b1124aae342349dff03a26-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:53,229 DEBUG [StoreOpener-622573e4a8b1124aae342349dff03a26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/f 2023-07-24 21:10:53,230 DEBUG [StoreOpener-622573e4a8b1124aae342349dff03a26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/f 2023-07-24 21:10:53,230 INFO [StoreOpener-622573e4a8b1124aae342349dff03a26-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 622573e4a8b1124aae342349dff03a26 columnFamilyName f 2023-07-24 21:10:53,231 INFO [StoreOpener-622573e4a8b1124aae342349dff03a26-1] regionserver.HStore(310): Store=622573e4a8b1124aae342349dff03a26/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:53,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:53,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:53,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:53,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:53,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 622573e4a8b1124aae342349dff03a26; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11802023680, jitterRate=0.09914910793304443}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:53,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 622573e4a8b1124aae342349dff03a26: 2023-07-24 21:10:53,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26., pid=96, masterSystemTime=1690233053222 2023-07-24 21:10:53,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:53,240 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 
2023-07-24 21:10:53,240 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=622573e4a8b1124aae342349dff03a26, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:53,240 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233053240"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233053240"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233053240"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233053240"}]},"ts":"1690233053240"} 2023-07-24 21:10:53,243 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-24 21:10:53,244 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 622573e4a8b1124aae342349dff03a26, server=jenkins-hbase4.apache.org,43799,1690233041130 in 178 msec 2023-07-24 21:10:53,245 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-24 21:10:53,245 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, ASSIGN in 340 msec 2023-07-24 21:10:53,246 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:10:53,246 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233053246"}]},"ts":"1690233053246"} 2023-07-24 21:10:53,247 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-24 21:10:53,249 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:10:53,252 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 409 msec 2023-07-24 21:10:53,284 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 21:10:53,285 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-24 21:10:53,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 21:10:53,451 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-24 21:10:53,451 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. 
Timeout = 60000ms 2023-07-24 21:10:53,451 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:53,457 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 2023-07-24 21:10:53,457 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:53,457 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-24 21:10:53,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:53,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 21:10:53,463 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:10:53,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-24 21:10:53,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 21:10:53,466 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:53,467 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1213385114 2023-07-24 21:10:53,471 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:53,471 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:53,474 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:10:53,476 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea 2023-07-24 21:10:53,476 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea empty. 
2023-07-24 21:10:53,477 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea 2023-07-24 21:10:53,477 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 21:10:53,495 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:53,497 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0451fd80253f6138de905095707b3dea, NAME => 'GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:53,525 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:53,525 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 0451fd80253f6138de905095707b3dea, disabling compactions & flushes 2023-07-24 21:10:53,525 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:53,525 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:53,525 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. after waiting 0 ms 2023-07-24 21:10:53,525 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:53,525 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 
2023-07-24 21:10:53,525 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 0451fd80253f6138de905095707b3dea: 2023-07-24 21:10:53,528 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:10:53,533 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233053532"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233053532"}]},"ts":"1690233053532"} 2023-07-24 21:10:53,534 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 21:10:53,535 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:10:53,535 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233053535"}]},"ts":"1690233053535"} 2023-07-24 21:10:53,536 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-24 21:10:53,541 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:53,541 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:53,541 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:53,541 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:53,541 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:53,541 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, ASSIGN}] 2023-07-24 21:10:53,544 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, ASSIGN 2023-07-24 21:10:53,544 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39543,1690233037533; forceNewPlan=false, retain=false 2023-07-24 21:10:53,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 21:10:53,695 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 21:10:53,696 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=0451fd80253f6138de905095707b3dea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:53,696 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233053696"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233053696"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233053696"}]},"ts":"1690233053696"} 2023-07-24 21:10:53,698 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 0451fd80253f6138de905095707b3dea, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:53,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 21:10:53,855 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:53,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0451fd80253f6138de905095707b3dea, NAME => 'GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:53,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:53,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:53,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:53,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:53,857 INFO [StoreOpener-0451fd80253f6138de905095707b3dea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:53,859 DEBUG [StoreOpener-0451fd80253f6138de905095707b3dea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/f 2023-07-24 21:10:53,859 DEBUG [StoreOpener-0451fd80253f6138de905095707b3dea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/f 2023-07-24 21:10:53,860 INFO [StoreOpener-0451fd80253f6138de905095707b3dea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0451fd80253f6138de905095707b3dea columnFamilyName f 2023-07-24 21:10:54,030 INFO [StoreOpener-0451fd80253f6138de905095707b3dea-1] regionserver.HStore(310): Store=0451fd80253f6138de905095707b3dea/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:54,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:54,039 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0451fd80253f6138de905095707b3dea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9891422400, jitterRate=-0.07878950238227844}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:54,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0451fd80253f6138de905095707b3dea: 2023-07-24 21:10:54,040 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea., pid=99, masterSystemTime=1690233053850 2023-07-24 21:10:54,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:54,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 
2023-07-24 21:10:54,042 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=0451fd80253f6138de905095707b3dea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:54,042 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054042"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233054042"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233054042"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233054042"}]},"ts":"1690233054042"} 2023-07-24 21:10:54,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-24 21:10:54,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 0451fd80253f6138de905095707b3dea, server=jenkins-hbase4.apache.org,39543,1690233037533 in 347 msec 2023-07-24 21:10:54,048 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-24 21:10:54,048 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, ASSIGN in 505 msec 2023-07-24 21:10:54,048 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:10:54,049 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233054048"}]},"ts":"1690233054048"} 2023-07-24 21:10:54,050 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-24 21:10:54,052 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:10:54,053 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 593 msec 2023-07-24 21:10:54,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 21:10:54,068 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-24 21:10:54,068 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-24 21:10:54,068 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:54,073 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-24 21:10:54,074 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:54,074 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-24 21:10:54,074 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:54,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 21:10:54,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:54,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 21:10:54,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:54,087 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1213385114 2023-07-24 21:10:54,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1213385114 2023-07-24 21:10:54,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:54,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1213385114 2023-07-24 21:10:54,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:54,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:54,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1213385114 2023-07-24 21:10:54,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 0451fd80253f6138de905095707b3dea to RSGroup Group_testMultiTableMove_1213385114 2023-07-24 21:10:54,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, REOPEN/MOVE 2023-07-24 21:10:54,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1213385114 2023-07-24 21:10:54,098 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 622573e4a8b1124aae342349dff03a26 to RSGroup Group_testMultiTableMove_1213385114 2023-07-24 21:10:54,098 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, REOPEN/MOVE 2023-07-24 21:10:54,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, REOPEN/MOVE 2023-07-24 21:10:54,099 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=0451fd80253f6138de905095707b3dea, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:10:54,100 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, REOPEN/MOVE 2023-07-24 21:10:54,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1213385114, current retry=0 2023-07-24 21:10:54,100 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233054099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233054099"}]},"ts":"1690233054099"} 2023-07-24 21:10:54,100 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=622573e4a8b1124aae342349dff03a26, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:54,100 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054100"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233054100"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233054100"}]},"ts":"1690233054100"} 2023-07-24 21:10:54,101 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 0451fd80253f6138de905095707b3dea, server=jenkins-hbase4.apache.org,39543,1690233037533}] 2023-07-24 21:10:54,102 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure 622573e4a8b1124aae342349dff03a26, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:54,255 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,255 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 622573e4a8b1124aae342349dff03a26, disabling compactions & flushes 2023-07-24 21:10:54,256 
DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0451fd80253f6138de905095707b3dea, disabling compactions & flushes 2023-07-24 21:10:54,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:54,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:54,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:54,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:54,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. after waiting 0 ms 2023-07-24 21:10:54,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:54,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. after waiting 0 ms 2023-07-24 21:10:54,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:54,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:54,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:54,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:54,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0451fd80253f6138de905095707b3dea: 2023-07-24 21:10:54,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0451fd80253f6138de905095707b3dea move to jenkins-hbase4.apache.org,35829,1690233037637 record at close sequenceid=2 2023-07-24 21:10:54,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 
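The region closes above are the first half of the two REOPEN/MOVE transitions (pid=100 and pid=101) set in motion by the earlier "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1213385114" request. A hedged sketch of issuing that request from a test client, assuming the hbase-rsgroup module's RSGroupAdminClient and an open Connection named conn:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  // 'conn' is assumed to point at the mini cluster's master.
  static void moveBothTables(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    Set<TableName> tables = new HashSet<>();
    tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
    tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));

    // Sends the RSGroupAdminService.MoveTables RPC. On the master this rewrites the
    // rsgroup znodes and queues one TransitRegionStateProcedure (REOPEN/MOVE) per
    // region, which is what the CLOSE/OPEN entries around this point record.
    rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_1213385114");
  }
}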
2023-07-24 21:10:54,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 622573e4a8b1124aae342349dff03a26: 2023-07-24 21:10:54,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 622573e4a8b1124aae342349dff03a26 move to jenkins-hbase4.apache.org,35829,1690233037637 record at close sequenceid=2 2023-07-24 21:10:54,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,264 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=0451fd80253f6138de905095707b3dea, regionState=CLOSED 2023-07-24 21:10:54,264 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054264"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233054264"}]},"ts":"1690233054264"} 2023-07-24 21:10:54,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,265 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=622573e4a8b1124aae342349dff03a26, regionState=CLOSED 2023-07-24 21:10:54,265 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054265"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233054265"}]},"ts":"1690233054265"} 2023-07-24 21:10:54,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-24 21:10:54,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 0451fd80253f6138de905095707b3dea, server=jenkins-hbase4.apache.org,39543,1690233037533 in 165 msec 2023-07-24 21:10:54,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-24 21:10:54,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure 622573e4a8b1124aae342349dff03a26, server=jenkins-hbase4.apache.org,43799,1690233041130 in 165 msec 2023-07-24 21:10:54,269 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35829,1690233037637; forceNewPlan=false, retain=false 2023-07-24 21:10:54,269 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35829,1690233037637; forceNewPlan=false, retain=false 2023-07-24 21:10:54,420 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=0451fd80253f6138de905095707b3dea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 
21:10:54,420 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=622573e4a8b1124aae342349dff03a26, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:54,420 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233054420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233054420"}]},"ts":"1690233054420"} 2023-07-24 21:10:54,420 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233054420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233054420"}]},"ts":"1690233054420"} 2023-07-24 21:10:54,422 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 0451fd80253f6138de905095707b3dea, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:54,423 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, state=RUNNABLE; OpenRegionProcedure 622573e4a8b1124aae342349dff03a26, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:54,578 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:54,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 622573e4a8b1124aae342349dff03a26, NAME => 'GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:54,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:54,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,580 INFO [StoreOpener-622573e4a8b1124aae342349dff03a26-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,581 DEBUG [StoreOpener-622573e4a8b1124aae342349dff03a26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/f 2023-07-24 21:10:54,581 DEBUG [StoreOpener-622573e4a8b1124aae342349dff03a26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/f 2023-07-24 21:10:54,582 INFO [StoreOpener-622573e4a8b1124aae342349dff03a26-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 622573e4a8b1124aae342349dff03a26 columnFamilyName f 2023-07-24 21:10:54,582 INFO [StoreOpener-622573e4a8b1124aae342349dff03a26-1] regionserver.HStore(310): Store=622573e4a8b1124aae342349dff03a26/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:54,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:54,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 622573e4a8b1124aae342349dff03a26; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11090750240, jitterRate=0.032906606793403625}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:54,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 622573e4a8b1124aae342349dff03a26: 2023-07-24 21:10:54,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26., pid=105, masterSystemTime=1690233054574 2023-07-24 21:10:54,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:54,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 
2023-07-24 21:10:54,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:54,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0451fd80253f6138de905095707b3dea, NAME => 'GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:54,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:54,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,593 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=622573e4a8b1124aae342349dff03a26, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:54,593 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054593"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233054593"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233054593"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233054593"}]},"ts":"1690233054593"} 2023-07-24 21:10:54,595 INFO [StoreOpener-0451fd80253f6138de905095707b3dea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,598 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-24 21:10:54,599 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure 622573e4a8b1124aae342349dff03a26, server=jenkins-hbase4.apache.org,35829,1690233037637 in 173 msec 2023-07-24 21:10:54,599 DEBUG [StoreOpener-0451fd80253f6138de905095707b3dea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/f 2023-07-24 21:10:54,599 DEBUG [StoreOpener-0451fd80253f6138de905095707b3dea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/f 2023-07-24 21:10:54,600 INFO [StoreOpener-0451fd80253f6138de905095707b3dea-1] compactions.CompactionConfiguration(173): 
size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0451fd80253f6138de905095707b3dea columnFamilyName f 2023-07-24 21:10:54,601 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, REOPEN/MOVE in 501 msec 2023-07-24 21:10:54,610 INFO [StoreOpener-0451fd80253f6138de905095707b3dea-1] regionserver.HStore(310): Store=0451fd80253f6138de905095707b3dea/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:54,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:54,617 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0451fd80253f6138de905095707b3dea; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10587634720, jitterRate=-0.013949677348136902}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:54,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0451fd80253f6138de905095707b3dea: 2023-07-24 21:10:54,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea., pid=104, masterSystemTime=1690233054574 2023-07-24 21:10:54,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:54,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 
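Once both regions have reopened on jenkins-hbase4.apache.org,35829 the move is effectively done, and the GetRSGroupInfoOfTable / GetRSGroupInfo requests in the entries that follow confirm the new membership. A sketch of that verification, under the same RSGroupAdminClient assumption (the JUnit assertions stand in for whatever checks the test actually makes):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyMoveSketch {
  static void verifyMembership(RSGroupAdminClient rsGroupAdmin) throws Exception {
    String group = "Group_testMultiTableMove_1213385114";

    // Matches the GetRSGroupInfoOfTable requests logged below.
    RSGroupInfo infoA = rsGroupAdmin.getRSGroupInfoOfTable(
        TableName.valueOf("GrouptestMultiTableMoveA"));
    RSGroupInfo infoB = rsGroupAdmin.getRSGroupInfoOfTable(
        TableName.valueOf("GrouptestMultiTableMoveB"));
    assertEquals(group, infoA.getName());
    assertEquals(group, infoB.getName());

    // Matches the GetRSGroupInfo request for the target group itself.
    RSGroupInfo groupInfo = rsGroupAdmin.getRSGroupInfo(group);
    assertTrue(groupInfo.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveB")));
  }
}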
2023-07-24 21:10:54,620 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=0451fd80253f6138de905095707b3dea, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:54,620 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233054620"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233054620"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233054620"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233054620"}]},"ts":"1690233054620"} 2023-07-24 21:10:54,624 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-24 21:10:54,624 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 0451fd80253f6138de905095707b3dea, server=jenkins-hbase4.apache.org,35829,1690233037637 in 200 msec 2023-07-24 21:10:54,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, REOPEN/MOVE in 528 msec 2023-07-24 21:10:55,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-24 21:10:55,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1213385114. 2023-07-24 21:10:55,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:55,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:55,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:55,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 21:10:55,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:55,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 21:10:55,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:55,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:55,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:55,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1213385114 2023-07-24 21:10:55,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:55,110 INFO [Listener at localhost/42247] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-24 21:10:55,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-24 21:10:55,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 21:10:55,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 21:10:55,115 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233055115"}]},"ts":"1690233055115"} 2023-07-24 21:10:55,116 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-24 21:10:55,119 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-24 21:10:55,123 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, UNASSIGN}] 2023-07-24 21:10:55,124 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, UNASSIGN 2023-07-24 21:10:55,125 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=622573e4a8b1124aae342349dff03a26, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:55,125 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233055125"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233055125"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233055125"}]},"ts":"1690233055125"} 2023-07-24 21:10:55,126 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure 622573e4a8b1124aae342349dff03a26, 
server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:55,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 21:10:55,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:55,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 622573e4a8b1124aae342349dff03a26, disabling compactions & flushes 2023-07-24 21:10:55,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:55,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:55,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. after waiting 0 ms 2023-07-24 21:10:55,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 2023-07-24 21:10:55,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:55,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26. 
2023-07-24 21:10:55,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 622573e4a8b1124aae342349dff03a26: 2023-07-24 21:10:55,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:55,288 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=622573e4a8b1124aae342349dff03a26, regionState=CLOSED 2023-07-24 21:10:55,288 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233055288"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233055288"}]},"ts":"1690233055288"} 2023-07-24 21:10:55,291 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-24 21:10:55,291 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure 622573e4a8b1124aae342349dff03a26, server=jenkins-hbase4.apache.org,35829,1690233037637 in 163 msec 2023-07-24 21:10:55,293 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-24 21:10:55,293 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=622573e4a8b1124aae342349dff03a26, UNASSIGN in 171 msec 2023-07-24 21:10:55,293 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233055293"}]},"ts":"1690233055293"} 2023-07-24 21:10:55,295 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-24 21:10:55,296 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-24 21:10:55,298 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 186 msec 2023-07-24 21:10:55,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 21:10:55,417 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-24 21:10:55,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-24 21:10:55,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 21:10:55,421 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 21:10:55,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1213385114' 2023-07-24 21:10:55,422 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 21:10:55,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:55,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1213385114 2023-07-24 21:10:55,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:55,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:55,426 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:55,428 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/recovered.edits] 2023-07-24 21:10:55,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-24 21:10:55,434 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/recovered.edits/7.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26/recovered.edits/7.seqid 2023-07-24 21:10:55,435 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveA/622573e4a8b1124aae342349dff03a26 2023-07-24 21:10:55,435 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 21:10:55,437 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 21:10:55,439 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-24 21:10:55,441 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-24 21:10:55,442 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 21:10:55,443 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
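The DisableTableProcedure (pid=106) and DeleteTableProcedure (pid=109) above correspond to ordinary Admin calls in the test's cleanup. A minimal sketch, assuming an Admin obtained from the test utility (for example TEST_UTIL.getAdmin()):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropTableSketch {
  static void dropTable(Admin admin, String name) throws Exception {
    TableName table = TableName.valueOf(name);
    // DisableTableProcedure: unassigns every region and flips the table to DISABLED in hbase:meta.
    admin.disableTable(table);
    // DeleteTableProcedure: archives the region directories (the HFileArchiver entries),
    // deletes the region and table-state rows from hbase:meta, and drops the descriptor.
    admin.deleteTable(table);
  }
}

Calling dropTable for "GrouptestMultiTableMoveA" and then "GrouptestMultiTableMoveB" would reproduce the sequence of procedures seen in the surrounding entries.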
2023-07-24 21:10:55,443 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233055443"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:55,444 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 21:10:55,444 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 622573e4a8b1124aae342349dff03a26, NAME => 'GrouptestMultiTableMoveA,,1690233052841.622573e4a8b1124aae342349dff03a26.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 21:10:55,444 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-24 21:10:55,444 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690233055444"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:55,446 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-24 21:10:55,447 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 21:10:55,448 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 29 msec 2023-07-24 21:10:55,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-24 21:10:55,530 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-24 21:10:55,530 INFO [Listener at localhost/42247] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-24 21:10:55,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-24 21:10:55,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 21:10:55,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 21:10:55,538 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233055538"}]},"ts":"1690233055538"} 2023-07-24 21:10:55,540 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-24 21:10:55,544 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-24 21:10:55,545 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, UNASSIGN}] 2023-07-24 21:10:55,546 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, UNASSIGN 2023-07-24 21:10:55,547 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=0451fd80253f6138de905095707b3dea, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:55,547 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233055547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233055547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233055547"}]},"ts":"1690233055547"} 2023-07-24 21:10:55,550 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 0451fd80253f6138de905095707b3dea, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:55,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 21:10:55,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:55,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0451fd80253f6138de905095707b3dea, disabling compactions & flushes 2023-07-24 21:10:55,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:55,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:55,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. after waiting 0 ms 2023-07-24 21:10:55,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 2023-07-24 21:10:55,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:55,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea. 
2023-07-24 21:10:55,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0451fd80253f6138de905095707b3dea: 2023-07-24 21:10:55,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0451fd80253f6138de905095707b3dea 2023-07-24 21:10:55,714 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=0451fd80253f6138de905095707b3dea, regionState=CLOSED 2023-07-24 21:10:55,714 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690233055714"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233055714"}]},"ts":"1690233055714"} 2023-07-24 21:10:55,718 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-24 21:10:55,718 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 0451fd80253f6138de905095707b3dea, server=jenkins-hbase4.apache.org,35829,1690233037637 in 166 msec 2023-07-24 21:10:55,720 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-24 21:10:55,720 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=0451fd80253f6138de905095707b3dea, UNASSIGN in 173 msec 2023-07-24 21:10:55,720 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233055720"}]},"ts":"1690233055720"} 2023-07-24 21:10:55,722 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-24 21:10:55,723 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-24 21:10:55,725 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 193 msec 2023-07-24 21:10:55,739 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 21:10:55,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 21:10:55,840 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-24 21:10:55,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-24 21:10:55,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 21:10:55,843 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 21:10:55,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1213385114' 2023-07-24 21:10:55,844 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 21:10:55,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:55,848 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea 2023-07-24 21:10:55,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1213385114 2023-07-24 21:10:55,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:55,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:55,850 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/recovered.edits] 2023-07-24 21:10:55,856 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/recovered.edits/7.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea/recovered.edits/7.seqid 2023-07-24 21:10:55,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-24 21:10:55,857 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/GrouptestMultiTableMoveB/0451fd80253f6138de905095707b3dea 2023-07-24 21:10:55,857 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 21:10:55,860 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 21:10:55,863 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-24 21:10:55,865 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 
2023-07-24 21:10:55,867 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 21:10:55,867 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 2023-07-24 21:10:55,867 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233055867"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:55,868 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 21:10:55,868 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0451fd80253f6138de905095707b3dea, NAME => 'GrouptestMultiTableMoveB,,1690233053459.0451fd80253f6138de905095707b3dea.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 21:10:55,868 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-24 21:10:55,868 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690233055868"}]},"ts":"9223372036854775807"} 2023-07-24 21:10:55,870 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-24 21:10:55,873 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 21:10:55,875 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 32 msec 2023-07-24 21:10:55,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-24 21:10:55,958 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-24 21:10:55,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:55,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:55,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:55,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:10:55,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:55,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35829] to rsgroup default 2023-07-24 21:10:55,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:55,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1213385114 2023-07-24 21:10:55,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:55,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:55,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1213385114, current retry=0 2023-07-24 21:10:55,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637] are moved back to Group_testMultiTableMove_1213385114 2023-07-24 21:10:55,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1213385114 => default 2023-07-24 21:10:55,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:55,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1213385114 2023-07-24 21:10:55,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:55,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:55,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 21:10:55,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:55,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:55,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:10:55,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:55,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:55,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:55,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:55,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:55,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:55,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:56,002 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:56,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:56,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:56,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:56,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:56,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:56,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234256013, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:56,014 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:10:56,015 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:56,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,017 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:56,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:56,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,040 INFO [Listener at localhost/42247] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=511 (was 514), OpenFileDescriptor=793 (was 818), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 409), ProcessCount=177 (was 177), AvailableMemoryMB=5628 (was 5849) 2023-07-24 21:10:56,040 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-24 21:10:56,061 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=511, OpenFileDescriptor=793, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=177, AvailableMemoryMB=5628 2023-07-24 21:10:56,062 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-24 21:10:56,062 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-24 21:10:56,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:56,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:10:56,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:56,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:56,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:56,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:56,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:56,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:56,083 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:56,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:56,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:56,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:56,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:56,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:56,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234256094, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:56,095 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:10:56,097 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:56,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,097 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:56,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:56,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:56,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-24 21:10:56,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 21:10:56,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:56,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:56,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup oldGroup 2023-07-24 21:10:56,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 21:10:56,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:56,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 21:10:56,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533] are moved back to default 2023-07-24 21:10:56,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-24 21:10:56,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:56,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 21:10:56,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 21:10:56,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:56,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,128 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-24 21:10:56,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 21:10:56,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 21:10:56,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:10:56,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:56,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40083] to rsgroup anotherRSGroup 2023-07-24 21:10:56,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 21:10:56,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 21:10:56,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:10:56,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 21:10:56,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40083,1690233037694] are moved back to default 2023-07-24 21:10:56,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-24 21:10:56,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:56,148 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 21:10:56,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 21:10:56,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-24 21:10:56,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:56,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:60356 deadline: 1690234256156, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-24 21:10:56,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-24 21:10:56,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:56,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:60356 deadline: 1690234256158, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-24 21:10:56,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-24 21:10:56,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:56,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:60356 deadline: 1690234256159, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-24 21:10:56,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-24 21:10:56,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:56,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:60356 deadline: 1690234256160, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-24 21:10:56,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:56,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:10:56,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:56,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40083] to rsgroup default 2023-07-24 21:10:56,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 21:10:56,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 21:10:56,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:10:56,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-24 21:10:56,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40083,1690233037694] are moved back to anotherRSGroup 2023-07-24 21:10:56,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-24 21:10:56,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:56,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-24 21:10:56,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 21:10:56,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 21:10:56,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:56,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:56,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-24 21:10:56,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:56,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup default 2023-07-24 21:10:56,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 21:10:56,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:56,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-24 21:10:56,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533] are moved back to oldGroup 2023-07-24 21:10:56,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-24 21:10:56,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:56,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-24 21:10:56,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 21:10:56,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:56,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:56,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
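The MoveServers/RemoveRSGroup pairs above are the standard cleanup: a group's servers are parked back in the default group before the now-empty group is removed. A sketch of that sequence follows, under the same assumption that RSGroupAdminClient offers the moveServers and removeRSGroup calls referenced in the surrounding stack traces; the host:port values are copied from this run and would differ elsewhere.

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RemoveGroupCleanupSketch {
      // Moves every server of "oldGroup" back to the default group, then drops the
      // group, mirroring the MoveServers -> RemoveRSGroup order in the log above.
      static void dropGroup(RSGroupAdminClient rsGroupAdmin) throws IOException {
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 39543),
            Address.fromParts("jenkins-hbase4.apache.org", 35829)));
        rsGroupAdmin.moveServers(servers, "default"); // regions drain back to default's servers
        rsGroupAdmin.removeRSGroup("oldGroup");       // only an emptied group can be removed
      }
    }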
2023-07-24 21:10:56,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:56,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:56,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:56,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:56,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:56,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:56,202 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:56,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:56,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:56,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:56,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:56,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:56,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234256213, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:56,213 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:10:56,215 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:56,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,216 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:56,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:56,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,233 INFO [Listener at localhost/42247] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=515 (was 511) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=793 (was 793), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 409), ProcessCount=177 (was 177), AvailableMemoryMB=5627 (was 5628) 2023-07-24 21:10:56,233 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-24 21:10:56,249 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=515, OpenFileDescriptor=793, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=177, AvailableMemoryMB=5626 2023-07-24 21:10:56,250 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-24 21:10:56,250 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-24 21:10:56,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:10:56,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
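The "Waiting for cleanup to finish [...]" entries above and below show the harness polling until only the default and master groups remain before the next test starts. A sketch of such a poll, assuming listRSGroups() returns the same group listing printed in the log; the group names and the 60-second budget are taken from the log, the polling interval is illustrative.

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class WaitForCleanupSketch {
      // Polls until only the two permanent groups ("default" and "master") are left,
      // or the 60,000 ms budget from the log is exhausted.
      static void waitForCleanup(RSGroupAdminClient rsGroupAdmin)
          throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + 60_000L;
        while (System.currentTimeMillis() < deadline) {
          List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
          boolean onlyPermanentGroups = groups.stream()
              .allMatch(g -> g.getName().equals("default") || g.getName().equals("master"));
          if (onlyPermanentGroups) {
            return;
          }
          Thread.sleep(100);   // polling interval is illustrative
        }
        throw new IOException("rsgroup cleanup did not finish in time");
      }
    }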
2023-07-24 21:10:56,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:56,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:10:56,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:56,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:10:56,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:10:56,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:10:56,265 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:10:56,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:10:56,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:10:56,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:56,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:10:56,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:10:56,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234256278, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:10:56,279 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:10:56,280 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:56,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,281 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:10:56,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:56,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:56,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-24 21:10:56,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 21:10:56,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:56,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:56,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup oldgroup 2023-07-24 21:10:56,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 21:10:56,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:56,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 21:10:56,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533] are moved back to default 2023-07-24 21:10:56,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-24 21:10:56,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:56,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:56,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:56,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 21:10:56,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:56,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:56,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-24 21:10:56,312 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:10:56,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-24 21:10:56,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 21:10:56,313 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 21:10:56,314 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,314 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,315 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:56,317 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:10:56,318 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,319 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 empty. 
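The create 'testRename' request above, with REGION_REPLICATION => '1' and a single family 'tr' left at its defaults, corresponds to a plain client-side create. A sketch with the standard Admin API follows; the log does not show the test's connection handling, so this is only an approximation of what it issues.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameTableSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)                                 // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr")) // family 'tr', default attributes
              .build();
          // createTable blocks until the CreateTableProcedure (pid=114 in this run) completes,
          // which is the repeated "Checking to see if procedure is done pid=114" polling below.
          admin.createTable(desc);
        }
      }
    }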
2023-07-24 21:10:56,319 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,319 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-24 21:10:56,337 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:56,338 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => cd960f0003b43c9f5355f1dc85f68cf3, NAME => 'testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:56,351 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:56,351 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing cd960f0003b43c9f5355f1dc85f68cf3, disabling compactions & flushes 2023-07-24 21:10:56,351 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:56,351 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:56,351 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. after waiting 0 ms 2023-07-24 21:10:56,351 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:56,351 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:56,351 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for cd960f0003b43c9f5355f1dc85f68cf3: 2023-07-24 21:10:56,354 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:10:56,354 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233056354"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233056354"}]},"ts":"1690233056354"} 2023-07-24 21:10:56,356 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 21:10:56,356 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:10:56,356 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233056356"}]},"ts":"1690233056356"} 2023-07-24 21:10:56,357 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-24 21:10:56,360 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:56,360 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:56,360 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:56,360 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:56,361 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, ASSIGN}] 2023-07-24 21:10:56,362 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, ASSIGN 2023-07-24 21:10:56,363 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:56,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 21:10:56,513 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 21:10:56,514 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:56,515 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233056514"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233056514"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233056514"}]},"ts":"1690233056514"} 2023-07-24 21:10:56,516 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:56,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 21:10:56,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:56,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd960f0003b43c9f5355f1dc85f68cf3, NAME => 'testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:56,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:56,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,674 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,675 DEBUG [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/tr 2023-07-24 21:10:56,675 DEBUG [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/tr 2023-07-24 21:10:56,676 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd960f0003b43c9f5355f1dc85f68cf3 columnFamilyName tr 2023-07-24 21:10:56,677 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] regionserver.HStore(310): Store=cd960f0003b43c9f5355f1dc85f68cf3/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:56,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:56,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:56,685 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd960f0003b43c9f5355f1dc85f68cf3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11372765440, jitterRate=0.059171319007873535}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:56,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd960f0003b43c9f5355f1dc85f68cf3: 2023-07-24 21:10:56,686 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3., pid=116, masterSystemTime=1690233056668 2023-07-24 21:10:56,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:56,688 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 
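Once the region is opened above, the entries that follow show the harness waiting until every region of testRename is assigned before driving the rsgroup move. A sketch of that wait using HBaseTestingUtility, assuming the waitUntilAllRegionsAssigned(TableName, timeout) overload that the 60000 ms timeout in the log suggests.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      // TEST_UTIL would be the running mini-cluster utility from the test base class.
      static void waitForTestRename(HBaseTestingUtility TEST_UTIL) throws Exception {
        // Blocks until hbase:meta and the assignment manager agree that all regions of
        // testRename are open, or fails after the 60,000 ms budget shown in the log.
        TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"), 60_000L);
      }
    }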
2023-07-24 21:10:56,688 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:56,688 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233056688"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233056688"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233056688"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233056688"}]},"ts":"1690233056688"} 2023-07-24 21:10:56,692 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-24 21:10:56,692 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,43799,1690233041130 in 174 msec 2023-07-24 21:10:56,695 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-24 21:10:56,695 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, ASSIGN in 332 msec 2023-07-24 21:10:56,695 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:10:56,696 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233056696"}]},"ts":"1690233056696"} 2023-07-24 21:10:56,697 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-24 21:10:56,703 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:10:56,704 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 394 msec 2023-07-24 21:10:56,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 21:10:56,916 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-24 21:10:56,917 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-24 21:10:56,917 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:56,921 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-24 21:10:56,921 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:56,921 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
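The entries above cover CreateTableProcedure pid=114 finishing for testRename (one region, column family tr) and the listener blocking until the region is assigned. A minimal sketch of how a test typically drives this against the mini-cluster, assuming the standard HBaseTestingUtility helpers rather than the literal test body:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class CreateTestRenameSketch {
  // Create the single-region table "testRename" with family "tr" and block
  // until assignment completes, which is what produces the CreateTableProcedure
  // and "Waiting until all regions ... get assigned" entries above.
  static void createAndWait(HBaseTestingUtility util) throws Exception {
    TableName tableName = TableName.valueOf("testRename");
    util.createTable(tableName, "tr");
    util.waitUntilAllRegionsAssigned(tableName); // log shows a 60000 ms timeout
  }
}
```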
2023-07-24 21:10:56,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-24 21:10:56,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 21:10:56,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:56,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:56,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:10:56,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-24 21:10:56,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region cd960f0003b43c9f5355f1dc85f68cf3 to RSGroup oldgroup 2023-07-24 21:10:56,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:10:56,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:10:56,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:10:56,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:10:56,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:10:56,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, REOPEN/MOVE 2023-07-24 21:10:56,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-24 21:10:56,930 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, REOPEN/MOVE 2023-07-24 21:10:56,931 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:56,931 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233056931"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233056931"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233056931"}]},"ts":"1690233056931"} 2023-07-24 21:10:56,933 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:57,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd960f0003b43c9f5355f1dc85f68cf3, disabling compactions & flushes 2023-07-24 21:10:57,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:57,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:57,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. after waiting 0 ms 2023-07-24 21:10:57,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:57,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:57,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:57,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd960f0003b43c9f5355f1dc85f68cf3: 2023-07-24 21:10:57,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cd960f0003b43c9f5355f1dc85f68cf3 move to jenkins-hbase4.apache.org,35829,1690233037637 record at close sequenceid=2 2023-07-24 21:10:57,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,101 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=CLOSED 2023-07-24 21:10:57,101 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233057101"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233057101"}]},"ts":"1690233057101"} 2023-07-24 21:10:57,105 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-24 21:10:57,105 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,43799,1690233041130 in 170 msec 2023-07-24 21:10:57,105 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35829,1690233037637; 
forceNewPlan=false, retain=false 2023-07-24 21:10:57,255 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 21:10:57,256 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:57,256 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233057256"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233057256"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233057256"}]},"ts":"1690233057256"} 2023-07-24 21:10:57,258 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:10:57,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:57,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd960f0003b43c9f5355f1dc85f68cf3, NAME => 'testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:57,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:57,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,416 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,417 DEBUG [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/tr 2023-07-24 21:10:57,417 DEBUG [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/tr 2023-07-24 21:10:57,417 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd960f0003b43c9f5355f1dc85f68cf3 columnFamilyName tr 2023-07-24 21:10:57,418 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] regionserver.HStore(310): Store=cd960f0003b43c9f5355f1dc85f68cf3/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:57,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:10:57,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd960f0003b43c9f5355f1dc85f68cf3; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10769961280, jitterRate=0.00303080677986145}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:57,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd960f0003b43c9f5355f1dc85f68cf3: 2023-07-24 21:10:57,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3., pid=119, masterSystemTime=1690233057410 2023-07-24 21:10:57,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:10:57,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 
2023-07-24 21:10:57,427 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:10:57,427 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233057427"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233057427"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233057427"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233057427"}]},"ts":"1690233057427"} 2023-07-24 21:10:57,430 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-24 21:10:57,430 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,35829,1690233037637 in 170 msec 2023-07-24 21:10:57,431 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, REOPEN/MOVE in 501 msec 2023-07-24 21:10:57,622 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-24 21:10:57,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-24 21:10:57,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
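The MoveTables request and the REOPEN/MOVE procedure above relocate testRename's only region onto a server in oldgroup. A hedged sketch of the client side using the RSGroupAdminClient from this hbase-rsgroup module; the table and group names come from the log, the surrounding wiring is illustrative:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesSketch {
  // Sketch of the call behind "move tables [testRename] to rsgroup oldgroup";
  // the Connection is assumed to come from the test's mini-cluster.
  static void moveToOldGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    TableName table = TableName.valueOf("testRename");
    rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");
    // GetRSGroupInfoOfTable, as seen in the log, reports the group after the move.
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
    System.out.println("testRename now in group: " + info.getName()); // expect "oldgroup"
  }
}
```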
2023-07-24 21:10:57,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:57,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:57,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:57,936 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:57,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 21:10:57,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:57,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 21:10:57,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:57,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 21:10:57,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:57,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:10:57,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:57,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-24 21:10:57,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 21:10:57,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 21:10:57,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:57,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 
21:10:57,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:10:57,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:10:57,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:57,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:57,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40083] to rsgroup normal 2023-07-24 21:10:57,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 21:10:57,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 21:10:57,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:57,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:57,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:10:57,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 21:10:57,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40083,1690233037694] are moved back to default 2023-07-24 21:10:57,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-24 21:10:57,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:10:57,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:57,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:57,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 21:10:57,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
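The block above adds a group named normal and moves the server on port 40083 out of default into it (AddRSGroup followed by MoveServers). A sketch of those two requests through the same client; the server address is taken from the log, the rest is illustrative:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class AddGroupSketch {
  // Sketch of the AddRSGroup/MoveServers requests seen above: create "normal"
  // and move jenkins-hbase4.apache.org:40083 from "default" into it.
  static void addNormalGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("normal");
    Address server = Address.fromString("jenkins-hbase4.apache.org:40083");
    rsGroupAdmin.moveServers(Collections.singleton(server), "normal");
  }
}
```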
2023-07-24 21:10:57,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:10:57,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-24 21:10:57,969 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:10:57,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-24 21:10:57,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 21:10:57,971 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 21:10:57,971 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 21:10:57,971 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:57,972 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:57,972 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:10:57,974 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:10:57,975 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:57,976 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 empty. 
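The create request above spells out the full descriptor for unmovedTable: one region replica and a single family ut with default settings. A sketch of building an equivalent descriptor through the 2.x builder API; whether the test constructs it this way or goes through a utility helper is not visible in the log:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class UnmovedTableSketch {
  // Rough equivalent of the descriptor logged above: REGION_REPLICATION => '1',
  // family 'ut' with VERSIONS => '1' and otherwise default attributes.
  static void create(Admin admin) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("unmovedTable"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("ut"))
            .setMaxVersions(1)
            .build())
        .build();
    admin.createTable(desc);
  }
}
```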
2023-07-24 21:10:57,976 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:57,976 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-24 21:10:57,990 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-24 21:10:57,992 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7b023beb5d50e7f867d5ff60b82fafc2, NAME => 'unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:10:58,002 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:58,002 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 7b023beb5d50e7f867d5ff60b82fafc2, disabling compactions & flushes 2023-07-24 21:10:58,002 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:58,002 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:58,002 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. after waiting 0 ms 2023-07-24 21:10:58,003 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:58,003 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:58,003 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 7b023beb5d50e7f867d5ff60b82fafc2: 2023-07-24 21:10:58,005 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:10:58,005 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233058005"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233058005"}]},"ts":"1690233058005"} 2023-07-24 21:10:58,007 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 21:10:58,007 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:10:58,007 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233058007"}]},"ts":"1690233058007"} 2023-07-24 21:10:58,008 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-24 21:10:58,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, ASSIGN}] 2023-07-24 21:10:58,019 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, ASSIGN 2023-07-24 21:10:58,019 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:58,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 21:10:58,171 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:58,171 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233058171"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233058171"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233058171"}]},"ts":"1690233058171"} 2023-07-24 21:10:58,172 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:58,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 21:10:58,327 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 
2023-07-24 21:10:58,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7b023beb5d50e7f867d5ff60b82fafc2, NAME => 'unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:58,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:58,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,329 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,331 DEBUG [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/ut 2023-07-24 21:10:58,331 DEBUG [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/ut 2023-07-24 21:10:58,331 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7b023beb5d50e7f867d5ff60b82fafc2 columnFamilyName ut 2023-07-24 21:10:58,332 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] regionserver.HStore(310): Store=7b023beb5d50e7f867d5ff60b82fafc2/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:58,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:10:58,338 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7b023beb5d50e7f867d5ff60b82fafc2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10407843040, jitterRate=-0.030694082379341125}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:58,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7b023beb5d50e7f867d5ff60b82fafc2: 2023-07-24 21:10:58,338 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2., pid=122, masterSystemTime=1690233058324 2023-07-24 21:10:58,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:58,340 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 
2023-07-24 21:10:58,340 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:58,340 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233058340"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233058340"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233058340"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233058340"}]},"ts":"1690233058340"} 2023-07-24 21:10:58,343 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-24 21:10:58,343 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,43799,1690233041130 in 169 msec 2023-07-24 21:10:58,344 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-24 21:10:58,344 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, ASSIGN in 326 msec 2023-07-24 21:10:58,345 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:10:58,345 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233058345"}]},"ts":"1690233058345"} 2023-07-24 21:10:58,346 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-24 21:10:58,348 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:10:58,349 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 382 msec 2023-07-24 21:10:58,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 21:10:58,573 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-24 21:10:58,573 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-24 21:10:58,573 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:58,577 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-24 21:10:58,577 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:58,578 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
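The "Waiting up to [60,000] milli-secs" entries come from the generic Waiter utility that waitUntilAllRegionsAssigned polls with. The test itself goes through that helper, but a direct use of Waiter on an arbitrary condition would look roughly like this sketch (the predicate here is illustrative, not what the test checks):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.client.Admin;

public class WaitSketch {
  // Poll until the table is reported available, up to 60 seconds, mirroring
  // the "Waiting up to [60,000] milli-secs(wait.for.ratio=[1])" entries above.
  static void waitForTableOnline(Configuration conf, Admin admin) throws Exception {
    TableName table = TableName.valueOf("unmovedTable");
    Waiter.waitFor(conf, 60000, () -> admin.isTableAvailable(table));
  }
}
```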
2023-07-24 21:10:58,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-24 21:10:58,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 21:10:58,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 21:10:58,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:58,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:58,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:10:58,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-24 21:10:58,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 7b023beb5d50e7f867d5ff60b82fafc2 to RSGroup normal 2023-07-24 21:10:58,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, REOPEN/MOVE 2023-07-24 21:10:58,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-24 21:10:58,585 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, REOPEN/MOVE 2023-07-24 21:10:58,586 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:58,586 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233058586"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233058586"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233058586"}]},"ts":"1690233058586"} 2023-07-24 21:10:58,588 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:10:58,741 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7b023beb5d50e7f867d5ff60b82fafc2, disabling compactions & flushes 2023-07-24 21:10:58,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 
2023-07-24 21:10:58,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:58,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. after waiting 0 ms 2023-07-24 21:10:58,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:58,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:10:58,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:58,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7b023beb5d50e7f867d5ff60b82fafc2: 2023-07-24 21:10:58,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7b023beb5d50e7f867d5ff60b82fafc2 move to jenkins-hbase4.apache.org,40083,1690233037694 record at close sequenceid=2 2023-07-24 21:10:58,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:58,750 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=CLOSED 2023-07-24 21:10:58,750 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233058749"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233058749"}]},"ts":"1690233058749"} 2023-07-24 21:10:58,753 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-24 21:10:58,753 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,43799,1690233041130 in 164 msec 2023-07-24 21:10:58,753 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40083,1690233037694; forceNewPlan=false, retain=false 2023-07-24 21:10:58,904 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:58,904 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233058904"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233058904"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233058904"}]},"ts":"1690233058904"} 2023-07-24 21:10:58,905 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:10:59,064 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:59,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7b023beb5d50e7f867d5ff60b82fafc2, NAME => 'unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:10:59,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:10:59,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,066 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,067 DEBUG [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/ut 2023-07-24 21:10:59,068 DEBUG [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/ut 2023-07-24 21:10:59,068 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
7b023beb5d50e7f867d5ff60b82fafc2 columnFamilyName ut 2023-07-24 21:10:59,069 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] regionserver.HStore(310): Store=7b023beb5d50e7f867d5ff60b82fafc2/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:10:59,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7b023beb5d50e7f867d5ff60b82fafc2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11405724000, jitterRate=0.06224082410335541}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:10:59,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7b023beb5d50e7f867d5ff60b82fafc2: 2023-07-24 21:10:59,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2., pid=125, masterSystemTime=1690233059057 2023-07-24 21:10:59,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:59,079 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 
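Once unmovedTable's region reopens on the server in normal, the remaining requests below rename oldgroup to newgroup and move unmovedTable back to default. A sketch of those two calls, assuming the RSGroupAdmin client in this branch exposes renameRSGroup(oldName, newName), as the RenameRSGroup RPC in the log suggests:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameGroupSketch {
  // Sketch of the final steps visible below: rename the group holding
  // testRename, then move unmovedTable back to the default group.
  static void renameAndMoveBack(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.renameRSGroup("oldgroup", "newgroup"); // assumed client API, see note above
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("unmovedTable")), "default");
  }
}
```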
2023-07-24 21:10:59,079 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:59,079 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233059079"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233059079"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233059079"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233059079"}]},"ts":"1690233059079"} 2023-07-24 21:10:59,082 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-24 21:10:59,082 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,40083,1690233037694 in 175 msec 2023-07-24 21:10:59,083 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, REOPEN/MOVE in 497 msec 2023-07-24 21:10:59,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-24 21:10:59,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-24 21:10:59,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:10:59,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:59,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:59,592 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:10:59,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 21:10:59,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:59,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 21:10:59,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:59,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 21:10:59,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:59,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-24 21:10:59,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 21:10:59,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:59,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:59,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 21:10:59,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-24 21:10:59,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-24 21:10:59,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:59,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:59,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-24 21:10:59,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:10:59,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 21:10:59,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:59,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 21:10:59,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:10:59,611 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:10:59,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:10:59,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-24 21:10:59,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 21:10:59,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:10:59,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:10:59,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 21:10:59,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:10:59,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-24 21:10:59,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region 7b023beb5d50e7f867d5ff60b82fafc2 to RSGroup default 2023-07-24 21:10:59,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, REOPEN/MOVE 2023-07-24 21:10:59,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 21:10:59,622 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, REOPEN/MOVE 2023-07-24 21:10:59,623 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:10:59,623 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233059623"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233059623"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233059623"}]},"ts":"1690233059623"} 2023-07-24 21:10:59,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:10:59,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7b023beb5d50e7f867d5ff60b82fafc2, disabling compactions & flushes 2023-07-24 21:10:59,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:59,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:59,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. after waiting 0 ms 2023-07-24 21:10:59,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:59,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:10:59,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:10:59,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7b023beb5d50e7f867d5ff60b82fafc2: 2023-07-24 21:10:59,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7b023beb5d50e7f867d5ff60b82fafc2 move to jenkins-hbase4.apache.org,43799,1690233041130 record at close sequenceid=5 2023-07-24 21:10:59,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:10:59,785 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=CLOSED 2023-07-24 21:10:59,785 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233059785"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233059785"}]},"ts":"1690233059785"} 2023-07-24 21:10:59,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-24 21:10:59,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,40083,1690233037694 in 162 msec 2023-07-24 21:10:59,788 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:10:59,939 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:10:59,939 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233059939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233059939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233059939"}]},"ts":"1690233059939"} 2023-07-24 21:10:59,941 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:11:00,095 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:11:00,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7b023beb5d50e7f867d5ff60b82fafc2, NAME => 'unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:00,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:00,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:00,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:00,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:00,097 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:00,098 DEBUG [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/ut 2023-07-24 21:11:00,098 DEBUG [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/ut 2023-07-24 21:11:00,099 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7b023beb5d50e7f867d5ff60b82fafc2 columnFamilyName ut 2023-07-24 21:11:00,099 INFO [StoreOpener-7b023beb5d50e7f867d5ff60b82fafc2-1] regionserver.HStore(310): Store=7b023beb5d50e7f867d5ff60b82fafc2/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:00,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:00,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:00,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:00,104 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7b023beb5d50e7f867d5ff60b82fafc2; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10948556800, jitterRate=0.01966381072998047}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:00,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7b023beb5d50e7f867d5ff60b82fafc2: 2023-07-24 21:11:00,104 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2., pid=128, masterSystemTime=1690233060092 2023-07-24 21:11:00,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:11:00,106 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 
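Earlier in this sequence the coprocessor handled a RenameRSGroup request (oldgroup to newgroup) and rewrote the /hbase/rsgroup znodes. A sketch of driving that rename from a client, under the assumption that RSGroupAdminClient on this branch exposes a renameRSGroup(oldName, newName) method mirroring the RenameRSGroup RPC seen in the log; if it does not, the same call would have to go through the RSGroupAdminService protobuf stub. Group names are taken from the log.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Assumption: this client method exists on branch-2.4 (the RenameRSGroup RPC is in the log).
      // The master updates the rsgroup metadata and the /hbase/rsgroup/<group> znodes, and
      // tables mapped to the old group name follow it to the new name.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
    }
  }
}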
2023-07-24 21:11:00,106 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7b023beb5d50e7f867d5ff60b82fafc2, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:00,106 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690233060106"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233060106"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233060106"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233060106"}]},"ts":"1690233060106"} 2023-07-24 21:11:00,109 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-24 21:11:00,109 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 7b023beb5d50e7f867d5ff60b82fafc2, server=jenkins-hbase4.apache.org,43799,1690233041130 in 167 msec 2023-07-24 21:11:00,110 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7b023beb5d50e7f867d5ff60b82fafc2, REOPEN/MOVE in 488 msec 2023-07-24 21:11:00,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-24 21:11:00,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-24 21:11:00,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:00,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40083] to rsgroup default 2023-07-24 21:11:00,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 21:11:00,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:00,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:00,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 21:11:00,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:11:00,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-24 21:11:00,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40083,1690233037694] are moved back to normal 2023-07-24 21:11:00,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-24 21:11:00,629 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:00,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-24 21:11:00,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:00,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:00,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 21:11:00,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 21:11:00,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:00,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:00,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:11:00,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:00,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:00,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:00,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:00,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:00,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 21:11:00,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 21:11:00,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:00,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-24 21:11:00,646 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:00,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 21:11:00,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:00,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-24 21:11:00,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(345): Moving region cd960f0003b43c9f5355f1dc85f68cf3 to RSGroup default 2023-07-24 21:11:00,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, REOPEN/MOVE 2023-07-24 21:11:00,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 21:11:00,648 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, REOPEN/MOVE 2023-07-24 21:11:00,649 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:11:00,649 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233060649"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233060649"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233060649"}]},"ts":"1690233060649"} 2023-07-24 21:11:00,650 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,35829,1690233037637}] 2023-07-24 21:11:00,788 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 21:11:00,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:00,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd960f0003b43c9f5355f1dc85f68cf3, disabling compactions & flushes 2023-07-24 21:11:00,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:00,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:00,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 
after waiting 0 ms 2023-07-24 21:11:00,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:00,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 21:11:00,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:00,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd960f0003b43c9f5355f1dc85f68cf3: 2023-07-24 21:11:00,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cd960f0003b43c9f5355f1dc85f68cf3 move to jenkins-hbase4.apache.org,40083,1690233037694 record at close sequenceid=5 2023-07-24 21:11:00,834 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:00,834 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=CLOSED 2023-07-24 21:11:00,834 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233060834"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233060834"}]},"ts":"1690233060834"} 2023-07-24 21:11:00,840 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-24 21:11:00,840 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,35829,1690233037637 in 186 msec 2023-07-24 21:11:00,841 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40083,1690233037694; forceNewPlan=false, retain=false 2023-07-24 21:11:00,991 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
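The teardown in the surrounding entries moves servers back to the default group and then removes the now-empty groups. A minimal sketch of those two steps, assuming RSGroupAdminClient and org.apache.hadoop.hbase.net.Address; the host:port and group name are taken from the log.

import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RestoreDefaultRSGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the region server back to the default group. Any regions it still hosts for
      // tables pinned to its old group are moved off first ("Moving N region(s) to group ...").
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40083)),
          "default");
      // A group can only be removed once it no longer contains servers or tables.
      rsGroupAdmin.removeRSGroup("normal");
    }
  }
}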
2023-07-24 21:11:00,991 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:00,992 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233060991"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233060991"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233060991"}]},"ts":"1690233060991"} 2023-07-24 21:11:00,995 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:11:01,152 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:01,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd960f0003b43c9f5355f1dc85f68cf3, NAME => 'testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:01,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:01,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:01,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:01,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:01,154 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:01,155 DEBUG [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/tr 2023-07-24 21:11:01,155 DEBUG [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/tr 2023-07-24 21:11:01,156 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd960f0003b43c9f5355f1dc85f68cf3 columnFamilyName tr 2023-07-24 21:11:01,156 INFO [StoreOpener-cd960f0003b43c9f5355f1dc85f68cf3-1] regionserver.HStore(310): Store=cd960f0003b43c9f5355f1dc85f68cf3/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:01,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:01,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:01,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:01,163 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd960f0003b43c9f5355f1dc85f68cf3; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12000439680, jitterRate=0.11762803792953491}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:01,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd960f0003b43c9f5355f1dc85f68cf3: 2023-07-24 21:11:01,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3., pid=131, masterSystemTime=1690233061147 2023-07-24 21:11:01,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:01,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 
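After each move completes, the test verifies placement through GetRSGroupInfoOfTable and ListRSGroupInfos requests like the ones logged here. A sketch of the same checks, assuming the RSGroupAdminClient getters on this branch; the table and group names are from the log.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class CheckRSGroupAssignment {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Which group does the table belong to after the move?
      RSGroupInfo byTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      System.out.println("testRename is in group " + byTable.getName());
      // What servers and tables does the target group hold now?
      RSGroupInfo target = rsGroupAdmin.getRSGroupInfo("default");
      System.out.println("default servers=" + target.getServers()
          + " tables=" + target.getTables());
    }
  }
}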
2023-07-24 21:11:01,166 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=cd960f0003b43c9f5355f1dc85f68cf3, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:01,166 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690233061166"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233061166"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233061166"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233061166"}]},"ts":"1690233061166"} 2023-07-24 21:11:01,170 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-24 21:11:01,170 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure cd960f0003b43c9f5355f1dc85f68cf3, server=jenkins-hbase4.apache.org,40083,1690233037694 in 173 msec 2023-07-24 21:11:01,171 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=cd960f0003b43c9f5355f1dc85f68cf3, REOPEN/MOVE in 522 msec 2023-07-24 21:11:01,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-24 21:11:01,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-24 21:11:01,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:01,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup default 2023-07-24 21:11:01,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 21:11:01,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:01,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-24 21:11:01,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533] are moved back to newgroup 2023-07-24 21:11:01,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-24 21:11:01,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:01,655 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-24 21:11:01,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:01,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:01,668 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:01,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:01,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:01,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:01,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:01,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:11:01,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:01,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234261682, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:11:01,683 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:01,685 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:01,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,686 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:01,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:01,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:01,709 INFO [Listener at localhost/42247] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=508 (was 515), OpenFileDescriptor=770 (was 793), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=418 (was 409) - SystemLoadAverage LEAK? -, ProcessCount=177 (was 177), AvailableMemoryMB=5630 (was 5626) - AvailableMemoryMB LEAK? 
- 2023-07-24 21:11:01,709 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-24 21:11:01,726 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=508, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=418, ProcessCount=177, AvailableMemoryMB=5628 2023-07-24 21:11:01,726 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-24 21:11:01,727 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-24 21:11:01,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:01,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:11:01,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:01,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:01,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:01,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:01,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:01,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:01,743 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:01,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:01,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:01,747 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:01,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:01,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:11:01,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:01,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234261753, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:11:01,754 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:11:01,756 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:01,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,757 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:01,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:01,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:01,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-24 21:11:01,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:11:01,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-24 21:11:01,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-24 21:11:01,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-24 21:11:01,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:01,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-24 21:11:01,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:01,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:60356 deadline: 1690234261774, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-24 21:11:01,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-24 21:11:01,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:01,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:60356 deadline: 1690234261776, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 21:11:01,779 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 21:11:01,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-24 21:11:01,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-24 21:11:01,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:01,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:60356 deadline: 1690234261785, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 21:11:01,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:01,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
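The callId 800/803/807 rejections above are the testBogusArgs probes: every lookup or mutation that names a nonexistent group, table, or server comes back from the master as a ConstraintException. Below is a rough, hypothetical sketch of that pattern against the RSGroupAdmin client interface from this module; BogusArgsSketch and probeBogusArgs are invented names, and the null-return behaviour of the lookups is inferred from the absence of exceptions in the log, not taken from the test source.

    // Hedged sketch, not the actual TestRSGroupsAdmin1 source.
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    public class BogusArgsSketch {
      static void probeBogusArgs(RSGroupAdmin rsGroupAdmin) throws Exception {
        // Lookups for nonexistent entities log no exception above, consistent with null returns.
        assert rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")) == null;
        assert rsGroupAdmin.getRSGroupOfServer(Address.fromParts("bogus", 123)) == null;
        assert rsGroupAdmin.getRSGroupInfo("bogus") == null;
        // Mutations against a nonexistent group are rejected server-side.
        try {
          rsGroupAdmin.removeRSGroup("bogus");
        } catch (ConstraintException expected) {
          // "RSGroup bogus does not exist" (callId 800 above)
        }
        try {
          rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
        } catch (ConstraintException expected) {
          // "RSGroup does not exist: bogus" (callId 803 above)
        }
        try {
          rsGroupAdmin.balanceRSGroup("bogus");
        } catch (ConstraintException expected) {
          // "RSGroup does not exist: bogus" (callId 807 above)
        }
      }
    }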
2023-07-24 21:11:01,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:01,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:01,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:01,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:01,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:01,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:01,804 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:01,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:01,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:01,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:01,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:01,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:11:01,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:01,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234261818, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:11:01,822 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:01,823 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:01,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,825 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:01,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:01,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:01,867 INFO [Listener at localhost/42247] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=512 (was 508) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4ec55d57-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=770 (was 770), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=418 (was 418), ProcessCount=177 (was 177), AvailableMemoryMB=5612 (was 5628) 2023-07-24 21:11:01,867 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-24 21:11:01,888 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=418, ProcessCount=177, AvailableMemoryMB=5612 2023-07-24 21:11:01,888 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-24 21:11:01,889 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-24 21:11:01,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:01,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
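The same setup/teardown churn repeats before and after every test in this run: push tables and servers back to the default group, drop the extra groups, re-add a "master" group, and try to move the master's address into it, which always fails with the "Server ... is either offline or it does not exist." ConstraintException and is logged only as "Got this on setup, FYI". A hypothetical sketch of that cycle follows; RSGroupCleanupSketch and resetRSGroups are invented names, and the logic is inferred from the log and stack traces rather than copied from TestRSGroupsBase.

    // Hedged reconstruction of the per-test cleanup cycle seen throughout this log.
    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupCleanupSketch {
      static void resetRSGroups(RSGroupAdmin rsGroupAdmin, Address masterAddress) throws IOException {
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
          if (!RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            // Mirrors "move tables [] to rsgroup default" / "move servers [] to rsgroup default"
            rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
            rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
            // Mirrors "remove rsgroup master"
            rsGroupAdmin.removeRSGroup(group.getName());
          }
        }
        // Mirrors "add rsgroup master"
        rsGroupAdmin.addRSGroup("master");
        try {
          rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (IOException expected) {
          // The master host:port is not a live region server, so this is the recurring
          // "Server ... is either offline or it does not exist." warning ("Got this on setup, FYI").
        }
      }
    }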
2023-07-24 21:11:01,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:01,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:01,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:01,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:01,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:01,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:01,906 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:01,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:01,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:01,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:01,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:01,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:11:01,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:01,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234261918, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:11:01,919 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:01,920 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:01,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,921 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:01,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:01,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:01,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:01,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:01,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_807358786 2023-07-24 21:11:01,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_807358786 2023-07-24 
21:11:01,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:01,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:11:01,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:01,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup Group_testDisabledTableMove_807358786 2023-07-24 21:11:01,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_807358786 2023-07-24 21:11:01,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:01,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:11:01,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 21:11:01,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533] are moved back to default 2023-07-24 21:11:01,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_807358786 2023-07-24 21:11:01,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:01,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:01,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:01,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_807358786 2023-07-24 21:11:01,951 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:01,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:01,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-24 21:11:01,955 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:01,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-24 21:11:01,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 21:11:01,957 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:01,957 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_807358786 2023-07-24 21:11:01,958 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:01,958 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:11:01,960 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:11:01,965 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286 2023-07-24 21:11:01,965 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:01,965 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:01,965 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:01,965 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:01,965 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b empty. 2023-07-24 21:11:01,966 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286 empty. 2023-07-24 21:11:01,965 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc empty. 2023-07-24 21:11:01,966 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae empty. 2023-07-24 21:11:01,966 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122 empty. 2023-07-24 21:11:01,966 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:01,966 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286 2023-07-24 21:11:01,966 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:01,966 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:01,966 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:01,967 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 21:11:01,984 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:01,985 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => a55265d696c56332c432ff9870e81286, NAME => 'Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:11:01,986 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => f56752d48b5322e803632455c8e896fc, NAME => 'Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:11:01,986 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 11dbb33e7c88078277f27dfee103769b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:11:02,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 11dbb33e7c88078277f27dfee103769b, disabling compactions & flushes 2023-07-24 21:11:02,011 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. after waiting 0 ms 2023-07-24 21:11:02,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 
2023-07-24 21:11:02,011 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 11dbb33e7c88078277f27dfee103769b: 2023-07-24 21:11:02,012 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 278702a87a5b9ecb1de7d5f7ae1f5122, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:11:02,024 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,024 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing f56752d48b5322e803632455c8e896fc, disabling compactions & flushes 2023-07-24 21:11:02,025 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. after waiting 0 ms 2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,025 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 
2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for f56752d48b5322e803632455c8e896fc: 2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing a55265d696c56332c432ff9870e81286, disabling compactions & flushes 2023-07-24 21:11:02,025 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => daa2a7fb41ab9fde81e1b12d249bcfae, NAME => 'Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp 2023-07-24 21:11:02,025 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. after waiting 0 ms 2023-07-24 21:11:02,025 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 2023-07-24 21:11:02,026 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 2023-07-24 21:11:02,026 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for a55265d696c56332c432ff9870e81286: 2023-07-24 21:11:02,031 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,031 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 278702a87a5b9ecb1de7d5f7ae1f5122, disabling compactions & flushes 2023-07-24 21:11:02,031 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 
2023-07-24 21:11:02,031 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 2023-07-24 21:11:02,031 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. after waiting 0 ms 2023-07-24 21:11:02,031 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 2023-07-24 21:11:02,031 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 2023-07-24 21:11:02,031 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 278702a87a5b9ecb1de7d5f7ae1f5122: 2023-07-24 21:11:02,039 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,039 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing daa2a7fb41ab9fde81e1b12d249bcfae, disabling compactions & flushes 2023-07-24 21:11:02,039 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 2023-07-24 21:11:02,039 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 2023-07-24 21:11:02,039 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. after waiting 0 ms 2023-07-24 21:11:02,039 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 2023-07-24 21:11:02,039 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 
2023-07-24 21:11:02,039 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for daa2a7fb41ab9fde81e1b12d249bcfae: 2023-07-24 21:11:02,041 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:11:02,042 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062042"}]},"ts":"1690233062042"} 2023-07-24 21:11:02,042 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062042"}]},"ts":"1690233062042"} 2023-07-24 21:11:02,042 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062042"}]},"ts":"1690233062042"} 2023-07-24 21:11:02,043 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062042"}]},"ts":"1690233062042"} 2023-07-24 21:11:02,043 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062042"}]},"ts":"1690233062042"} 2023-07-24 21:11:02,045 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-24 21:11:02,045 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:11:02,046 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233062046"}]},"ts":"1690233062046"} 2023-07-24 21:11:02,047 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-24 21:11:02,051 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:02,051 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:02,051 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:02,051 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:02,051 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a55265d696c56332c432ff9870e81286, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f56752d48b5322e803632455c8e896fc, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=11dbb33e7c88078277f27dfee103769b, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=278702a87a5b9ecb1de7d5f7ae1f5122, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=daa2a7fb41ab9fde81e1b12d249bcfae, ASSIGN}] 2023-07-24 21:11:02,053 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=11dbb33e7c88078277f27dfee103769b, ASSIGN 2023-07-24 21:11:02,053 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=daa2a7fb41ab9fde81e1b12d249bcfae, ASSIGN 2023-07-24 21:11:02,053 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=278702a87a5b9ecb1de7d5f7ae1f5122, ASSIGN 2023-07-24 21:11:02,054 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f56752d48b5322e803632455c8e896fc, ASSIGN 2023-07-24 21:11:02,054 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a55265d696c56332c432ff9870e81286, ASSIGN 2023-07-24 21:11:02,054 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=11dbb33e7c88078277f27dfee103769b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40083,1690233037694; forceNewPlan=false, retain=false 2023-07-24 21:11:02,054 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=278702a87a5b9ecb1de7d5f7ae1f5122, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:11:02,054 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f56752d48b5322e803632455c8e896fc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43799,1690233041130; forceNewPlan=false, retain=false 2023-07-24 21:11:02,054 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=daa2a7fb41ab9fde81e1b12d249bcfae, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40083,1690233037694; forceNewPlan=false, retain=false 2023-07-24 21:11:02,055 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a55265d696c56332c432ff9870e81286, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40083,1690233037694; forceNewPlan=false, retain=false 2023-07-24 21:11:02,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 21:11:02,206 INFO [jenkins-hbase4:37361] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 21:11:02,211 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=a55265d696c56332c432ff9870e81286, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,211 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=278702a87a5b9ecb1de7d5f7ae1f5122, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:02,211 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062211"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062211"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062211"}]},"ts":"1690233062211"} 2023-07-24 21:11:02,211 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=11dbb33e7c88078277f27dfee103769b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,211 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=daa2a7fb41ab9fde81e1b12d249bcfae, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,211 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f56752d48b5322e803632455c8e896fc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:02,211 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062211"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062211"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062211"}]},"ts":"1690233062211"} 2023-07-24 21:11:02,211 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062211"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062211"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062211"}]},"ts":"1690233062211"} 2023-07-24 21:11:02,211 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062211"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062211"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062211"}]},"ts":"1690233062211"} 2023-07-24 21:11:02,211 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062211"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062211"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062211"}]},"ts":"1690233062211"} 2023-07-24 21:11:02,213 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=133, state=RUNNABLE; OpenRegionProcedure a55265d696c56332c432ff9870e81286, 
server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:11:02,213 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=137, state=RUNNABLE; OpenRegionProcedure daa2a7fb41ab9fde81e1b12d249bcfae, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:11:02,214 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=134, state=RUNNABLE; OpenRegionProcedure f56752d48b5322e803632455c8e896fc, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:11:02,215 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=135, state=RUNNABLE; OpenRegionProcedure 11dbb33e7c88078277f27dfee103769b, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:11:02,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; OpenRegionProcedure 278702a87a5b9ecb1de7d5f7ae1f5122, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:11:02,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 21:11:02,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 2023-07-24 21:11:02,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 278702a87a5b9ecb1de7d5f7ae1f5122, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 21:11:02,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 
2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 11dbb33e7c88078277f27dfee103769b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,376 INFO [StoreOpener-278702a87a5b9ecb1de7d5f7ae1f5122-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,377 INFO [StoreOpener-11dbb33e7c88078277f27dfee103769b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,378 DEBUG [StoreOpener-11dbb33e7c88078277f27dfee103769b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b/f 2023-07-24 21:11:02,378 DEBUG [StoreOpener-11dbb33e7c88078277f27dfee103769b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b/f 2023-07-24 21:11:02,379 INFO [StoreOpener-11dbb33e7c88078277f27dfee103769b-1] compactions.CompactionConfiguration(173): 
size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 11dbb33e7c88078277f27dfee103769b columnFamilyName f 2023-07-24 21:11:02,380 INFO [StoreOpener-11dbb33e7c88078277f27dfee103769b-1] regionserver.HStore(310): Store=11dbb33e7c88078277f27dfee103769b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:02,380 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,383 DEBUG [StoreOpener-278702a87a5b9ecb1de7d5f7ae1f5122-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122/f 2023-07-24 21:11:02,384 DEBUG [StoreOpener-278702a87a5b9ecb1de7d5f7ae1f5122-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122/f 2023-07-24 21:11:02,384 INFO [StoreOpener-278702a87a5b9ecb1de7d5f7ae1f5122-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 278702a87a5b9ecb1de7d5f7ae1f5122 columnFamilyName f 2023-07-24 21:11:02,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,384 INFO [StoreOpener-278702a87a5b9ecb1de7d5f7ae1f5122-1] regionserver.HStore(310): Store=278702a87a5b9ecb1de7d5f7ae1f5122/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:02,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:02,388 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 11dbb33e7c88078277f27dfee103769b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9925590880, jitterRate=-0.0756073147058487}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:02,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 11dbb33e7c88078277f27dfee103769b: 2023-07-24 21:11:02,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:02,391 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 278702a87a5b9ecb1de7d5f7ae1f5122; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10660469440, jitterRate=-0.007166415452957153}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:02,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 278702a87a5b9ecb1de7d5f7ae1f5122: 2023-07-24 21:11:02,392 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122., pid=142, masterSystemTime=1690233062370 2023-07-24 21:11:02,393 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b., pid=141, masterSystemTime=1690233062368 2023-07-24 21:11:02,394 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=278702a87a5b9ecb1de7d5f7ae1f5122, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:02,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 
2023-07-24 21:11:02,394 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 2023-07-24 21:11:02,394 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062394"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233062394"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233062394"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233062394"}]},"ts":"1690233062394"} 2023-07-24 21:11:02,394 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f56752d48b5322e803632455c8e896fc, NAME => 'Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 21:11:02,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,395 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=11dbb33e7c88078277f27dfee103769b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 
2023-07-24 21:11:02,396 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062395"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233062395"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233062395"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233062395"}]},"ts":"1690233062395"} 2023-07-24 21:11:02,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => daa2a7fb41ab9fde81e1b12d249bcfae, NAME => 'Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 21:11:02,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,398 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-24 21:11:02,398 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; OpenRegionProcedure 278702a87a5b9ecb1de7d5f7ae1f5122, server=jenkins-hbase4.apache.org,43799,1690233041130 in 180 msec 2023-07-24 21:11:02,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=135 2023-07-24 21:11:02,399 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=278702a87a5b9ecb1de7d5f7ae1f5122, ASSIGN in 347 msec 2023-07-24 21:11:02,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=135, state=SUCCESS; OpenRegionProcedure 11dbb33e7c88078277f27dfee103769b, server=jenkins-hbase4.apache.org,40083,1690233037694 in 182 msec 2023-07-24 21:11:02,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=11dbb33e7c88078277f27dfee103769b, ASSIGN in 348 msec 2023-07-24 21:11:02,402 INFO [StoreOpener-f56752d48b5322e803632455c8e896fc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,405 INFO [StoreOpener-daa2a7fb41ab9fde81e1b12d249bcfae-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,407 DEBUG [StoreOpener-f56752d48b5322e803632455c8e896fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc/f 2023-07-24 21:11:02,407 DEBUG [StoreOpener-daa2a7fb41ab9fde81e1b12d249bcfae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae/f 2023-07-24 21:11:02,407 DEBUG [StoreOpener-daa2a7fb41ab9fde81e1b12d249bcfae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae/f 2023-07-24 21:11:02,407 DEBUG [StoreOpener-f56752d48b5322e803632455c8e896fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc/f 2023-07-24 21:11:02,408 INFO [StoreOpener-daa2a7fb41ab9fde81e1b12d249bcfae-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region daa2a7fb41ab9fde81e1b12d249bcfae columnFamilyName f 2023-07-24 21:11:02,408 INFO [StoreOpener-f56752d48b5322e803632455c8e896fc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f56752d48b5322e803632455c8e896fc columnFamilyName f 2023-07-24 21:11:02,408 INFO [StoreOpener-daa2a7fb41ab9fde81e1b12d249bcfae-1] regionserver.HStore(310): Store=daa2a7fb41ab9fde81e1b12d249bcfae/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:02,408 INFO [StoreOpener-f56752d48b5322e803632455c8e896fc-1] regionserver.HStore(310): Store=f56752d48b5322e803632455c8e896fc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:02,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:02,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:02,431 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f56752d48b5322e803632455c8e896fc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10042861920, jitterRate=-0.06468559801578522}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:02,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f56752d48b5322e803632455c8e896fc: 2023-07-24 21:11:02,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened daa2a7fb41ab9fde81e1b12d249bcfae; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10882588160, jitterRate=0.013520002365112305}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:02,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for daa2a7fb41ab9fde81e1b12d249bcfae: 2023-07-24 21:11:02,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc., pid=140, masterSystemTime=1690233062370 2023-07-24 21:11:02,433 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae., pid=139, masterSystemTime=1690233062368 2023-07-24 21:11:02,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,435 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f56752d48b5322e803632455c8e896fc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:02,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 2023-07-24 21:11:02,435 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062434"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233062434"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233062434"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233062434"}]},"ts":"1690233062434"} 2023-07-24 21:11:02,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 2023-07-24 21:11:02,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 
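The desiredMaxFileSize values reported by ConstantSizeRegionSplitPolicy in the "Opened" entries above follow directly from the configured region size and the logged jitterRate: desiredMaxFileSize ≈ hbase.hregion.max.filesize × (1 + jitterRate), assuming the default hbase.hregion.max.filesize of 10737418240 bytes (10 GiB). For example, for f56752d48b5322e803632455c8e896fc: 10737418240 × (1 − 0.06468559801578522) ≈ 10042861920, and for daa2a7fb41ab9fde81e1b12d249bcfae: 10737418240 × (1 + 0.013520002365112305) ≈ 10882588160, matching the values logged at 21:11:02,431–432.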
2023-07-24 21:11:02,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a55265d696c56332c432ff9870e81286, NAME => 'Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 21:11:02,435 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=daa2a7fb41ab9fde81e1b12d249bcfae, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,436 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062435"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233062435"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233062435"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233062435"}]},"ts":"1690233062435"} 2023-07-24 21:11:02,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:02,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=134 2023-07-24 21:11:02,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=134, state=SUCCESS; OpenRegionProcedure f56752d48b5322e803632455c8e896fc, server=jenkins-hbase4.apache.org,43799,1690233041130 in 222 msec 2023-07-24 21:11:02,440 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=137 2023-07-24 21:11:02,440 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; OpenRegionProcedure daa2a7fb41ab9fde81e1b12d249bcfae, server=jenkins-hbase4.apache.org,40083,1690233037694 in 224 msec 2023-07-24 21:11:02,440 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f56752d48b5322e803632455c8e896fc, ASSIGN in 388 msec 2023-07-24 21:11:02,441 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=daa2a7fb41ab9fde81e1b12d249bcfae, ASSIGN in 389 msec 2023-07-24 21:11:02,445 INFO [StoreOpener-a55265d696c56332c432ff9870e81286-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a55265d696c56332c432ff9870e81286 2023-07-24 
21:11:02,449 DEBUG [StoreOpener-a55265d696c56332c432ff9870e81286-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286/f 2023-07-24 21:11:02,449 DEBUG [StoreOpener-a55265d696c56332c432ff9870e81286-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286/f 2023-07-24 21:11:02,449 INFO [StoreOpener-a55265d696c56332c432ff9870e81286-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a55265d696c56332c432ff9870e81286 columnFamilyName f 2023-07-24 21:11:02,450 INFO [StoreOpener-a55265d696c56332c432ff9870e81286-1] regionserver.HStore(310): Store=a55265d696c56332c432ff9870e81286/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:02,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:02,457 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a55265d696c56332c432ff9870e81286; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9860863360, jitterRate=-0.08163553476333618}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:02,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a55265d696c56332c432ff9870e81286: 2023-07-24 21:11:02,458 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286., pid=138, masterSystemTime=1690233062368 
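The region opens traced above (five regions of Group_testDisabledTableMove with a single column family 'f' and boundaries '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz') are the server side of the table creation that completes just below as pid=132. A minimal client-side sketch that would produce this layout, assuming the standard HBase 2.x Admin API (class name and connection setup are illustrative, not taken from the test source):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateGroupTestTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
          // One column family 'f', matching the HStore/StoreOpener entries above.
          TableDescriptorBuilder td = TableDescriptorBuilder.newBuilder(tableName)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
          // Four split keys yield the five regions seen in the log; the middle two
          // keys are binary (i\xBF\x14i\xBE and r\x1C\xC7r\x1B).
          byte[][] splitKeys = new byte[][] {
              Bytes.toBytes("aaaaa"),
              new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
              new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
              Bytes.toBytes("zzzzz")
          };
          // Submits CreateTableProcedure on the master and waits for the regions
          // to be created and assigned, as traced by pid=132 and its children.
          admin.createTable(td.build(), splitKeys);
        }
      }
    }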
2023-07-24 21:11:02,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 2023-07-24 21:11:02,460 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 2023-07-24 21:11:02,460 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=a55265d696c56332c432ff9870e81286, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,460 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062460"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233062460"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233062460"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233062460"}]},"ts":"1690233062460"} 2023-07-24 21:11:02,464 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=133 2023-07-24 21:11:02,464 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=133, state=SUCCESS; OpenRegionProcedure a55265d696c56332c432ff9870e81286, server=jenkins-hbase4.apache.org,40083,1690233037694 in 249 msec 2023-07-24 21:11:02,465 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-24 21:11:02,466 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a55265d696c56332c432ff9870e81286, ASSIGN in 413 msec 2023-07-24 21:11:02,466 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:11:02,466 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233062466"}]},"ts":"1690233062466"} 2023-07-24 21:11:02,467 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-24 21:11:02,469 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:11:02,470 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 516 msec 2023-07-24 21:11:02,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 21:11:02,559 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-24 21:11:02,560 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. 
Timeout = 60000ms 2023-07-24 21:11:02,560 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:02,564 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-24 21:11:02,565 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:02,565 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-24 21:11:02,565 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:02,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 21:11:02,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:11:02,573 INFO [Listener at localhost/42247] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 21:11:02,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 21:11:02,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-24 21:11:02,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 21:11:02,577 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233062577"}]},"ts":"1690233062577"} 2023-07-24 21:11:02,579 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-24 21:11:02,580 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-24 21:11:02,581 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a55265d696c56332c432ff9870e81286, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f56752d48b5322e803632455c8e896fc, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=11dbb33e7c88078277f27dfee103769b, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=278702a87a5b9ecb1de7d5f7ae1f5122, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=daa2a7fb41ab9fde81e1b12d249bcfae, UNASSIGN}] 2023-07-24 21:11:02,583 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=278702a87a5b9ecb1de7d5f7ae1f5122, UNASSIGN 2023-07-24 21:11:02,583 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f56752d48b5322e803632455c8e896fc, UNASSIGN 2023-07-24 21:11:02,583 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=11dbb33e7c88078277f27dfee103769b, UNASSIGN 2023-07-24 21:11:02,583 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=daa2a7fb41ab9fde81e1b12d249bcfae, UNASSIGN 2023-07-24 21:11:02,583 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a55265d696c56332c432ff9870e81286, UNASSIGN 2023-07-24 21:11:02,583 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=278702a87a5b9ecb1de7d5f7ae1f5122, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:02,584 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062583"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062583"}]},"ts":"1690233062583"} 2023-07-24 21:11:02,584 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f56752d48b5322e803632455c8e896fc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:02,584 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062584"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062584"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062584"}]},"ts":"1690233062584"} 2023-07-24 21:11:02,584 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=11dbb33e7c88078277f27dfee103769b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,584 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062584"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062584"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062584"}]},"ts":"1690233062584"} 2023-07-24 21:11:02,584 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=daa2a7fb41ab9fde81e1b12d249bcfae, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,584 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=a55265d696c56332c432ff9870e81286, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:02,584 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062584"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062584"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062584"}]},"ts":"1690233062584"} 2023-07-24 21:11:02,584 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062584"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233062584"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233062584"}]},"ts":"1690233062584"} 2023-07-24 21:11:02,585 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=147, state=RUNNABLE; CloseRegionProcedure 278702a87a5b9ecb1de7d5f7ae1f5122, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:11:02,585 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure f56752d48b5322e803632455c8e896fc, server=jenkins-hbase4.apache.org,43799,1690233041130}] 2023-07-24 21:11:02,587 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=146, state=RUNNABLE; CloseRegionProcedure 11dbb33e7c88078277f27dfee103769b, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:11:02,588 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=148, state=RUNNABLE; CloseRegionProcedure daa2a7fb41ab9fde81e1b12d249bcfae, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:11:02,590 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=144, state=RUNNABLE; CloseRegionProcedure a55265d696c56332c432ff9870e81286, server=jenkins-hbase4.apache.org,40083,1690233037694}] 2023-07-24 21:11:02,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 21:11:02,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f56752d48b5322e803632455c8e896fc, disabling compactions & flushes 2023-07-24 21:11:02,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 
2023-07-24 21:11:02,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. after waiting 0 ms 2023-07-24 21:11:02,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,741 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 11dbb33e7c88078277f27dfee103769b, disabling compactions & flushes 2023-07-24 21:11:02,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. after waiting 0 ms 2023-07-24 21:11:02,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:11:02,744 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc. 2023-07-24 21:11:02,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f56752d48b5322e803632455c8e896fc: 2023-07-24 21:11:02,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 278702a87a5b9ecb1de7d5f7ae1f5122, disabling compactions & flushes 2023-07-24 21:11:02,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 2023-07-24 21:11:02,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 
2023-07-24 21:11:02,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. after waiting 0 ms 2023-07-24 21:11:02,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:11:02,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 2023-07-24 21:11:02,747 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f56752d48b5322e803632455c8e896fc, regionState=CLOSED 2023-07-24 21:11:02,747 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062747"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062747"}]},"ts":"1690233062747"} 2023-07-24 21:11:02,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b. 2023-07-24 21:11:02,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 11dbb33e7c88078277f27dfee103769b: 2023-07-24 21:11:02,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing daa2a7fb41ab9fde81e1b12d249bcfae, disabling compactions & flushes 2023-07-24 21:11:02,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 2023-07-24 21:11:02,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 2023-07-24 21:11:02,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. after waiting 0 ms 2023-07-24 21:11:02,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 
2023-07-24 21:11:02,750 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=11dbb33e7c88078277f27dfee103769b, regionState=CLOSED 2023-07-24 21:11:02,750 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062750"}]},"ts":"1690233062750"} 2023-07-24 21:11:02,751 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-24 21:11:02,751 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure f56752d48b5322e803632455c8e896fc, server=jenkins-hbase4.apache.org,43799,1690233041130 in 163 msec 2023-07-24 21:11:02,753 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f56752d48b5322e803632455c8e896fc, UNASSIGN in 170 msec 2023-07-24 21:11:02,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:11:02,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=146 2023-07-24 21:11:02,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; CloseRegionProcedure 11dbb33e7c88078277f27dfee103769b, server=jenkins-hbase4.apache.org,40083,1690233037694 in 165 msec 2023-07-24 21:11:02,754 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122. 
2023-07-24 21:11:02,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 278702a87a5b9ecb1de7d5f7ae1f5122: 2023-07-24 21:11:02,755 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=11dbb33e7c88078277f27dfee103769b, UNASSIGN in 173 msec 2023-07-24 21:11:02,755 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,756 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=278702a87a5b9ecb1de7d5f7ae1f5122, regionState=CLOSED 2023-07-24 21:11:02,756 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690233062756"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062756"}]},"ts":"1690233062756"} 2023-07-24 21:11:02,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:11:02,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae. 2023-07-24 21:11:02,757 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for daa2a7fb41ab9fde81e1b12d249bcfae: 2023-07-24 21:11:02,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a55265d696c56332c432ff9870e81286, disabling compactions & flushes 2023-07-24 21:11:02,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 2023-07-24 21:11:02,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 2023-07-24 21:11:02,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. after waiting 0 ms 2023-07-24 21:11:02,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 
2023-07-24 21:11:02,759 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=daa2a7fb41ab9fde81e1b12d249bcfae, regionState=CLOSED 2023-07-24 21:11:02,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=147 2023-07-24 21:11:02,759 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062759"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062759"}]},"ts":"1690233062759"} 2023-07-24 21:11:02,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=147, state=SUCCESS; CloseRegionProcedure 278702a87a5b9ecb1de7d5f7ae1f5122, server=jenkins-hbase4.apache.org,43799,1690233041130 in 172 msec 2023-07-24 21:11:02,761 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=278702a87a5b9ecb1de7d5f7ae1f5122, UNASSIGN in 178 msec 2023-07-24 21:11:02,762 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=148 2023-07-24 21:11:02,762 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=148, state=SUCCESS; CloseRegionProcedure daa2a7fb41ab9fde81e1b12d249bcfae, server=jenkins-hbase4.apache.org,40083,1690233037694 in 173 msec 2023-07-24 21:11:02,764 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=daa2a7fb41ab9fde81e1b12d249bcfae, UNASSIGN in 181 msec 2023-07-24 21:11:02,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:11:02,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286. 
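The unassign/close entries above are the server-side execution of DisableTableProcedure pid=143, which finishes just below. On the client side this is driven by a plain table disable; a minimal sketch, assuming the standard HBase 2.x Admin API (class name and connection setup are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableGroupTestTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
          // Submits DisableTableProcedure on the master and blocks until every region
          // of the table is unassigned and the table state is DISABLED in hbase:meta.
          admin.disableTable(tableName);
          // Mirrors the DISABLED state written to hbase:meta in the entries below.
          assert admin.isTableDisabled(tableName);
        }
      }
    }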
2023-07-24 21:11:02,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a55265d696c56332c432ff9870e81286: 2023-07-24 21:11:02,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,772 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=a55265d696c56332c432ff9870e81286, regionState=CLOSED 2023-07-24 21:11:02,772 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690233062772"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233062772"}]},"ts":"1690233062772"} 2023-07-24 21:11:02,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=144 2023-07-24 21:11:02,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=144, state=SUCCESS; CloseRegionProcedure a55265d696c56332c432ff9870e81286, server=jenkins-hbase4.apache.org,40083,1690233037694 in 183 msec 2023-07-24 21:11:02,775 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=143 2023-07-24 21:11:02,775 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a55265d696c56332c432ff9870e81286, UNASSIGN in 193 msec 2023-07-24 21:11:02,776 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233062776"}]},"ts":"1690233062776"} 2023-07-24 21:11:02,777 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-24 21:11:02,779 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-24 21:11:02,781 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 206 msec 2023-07-24 21:11:02,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 21:11:02,879 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-24 21:11:02,880 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_807358786 2023-07-24 21:11:02,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_807358786 2023-07-24 21:11:02,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:02,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_807358786 2023-07-24 21:11:02,884 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:02,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:11:02,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-24 21:11:02,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_807358786, current retry=0 2023-07-24 21:11:02,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_807358786. 2023-07-24 21:11:02,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:02,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:02,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:02,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 21:11:02,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:11:02,895 INFO [Listener at localhost/42247] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 21:11:02,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 21:11:02,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:02,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 919 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:60356 deadline: 1690233122895, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-24 21:11:02,896 DEBUG [Listener at localhost/42247] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-24 21:11:02,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-24 21:11:02,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 21:11:02,900 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 21:11:02,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_807358786' 2023-07-24 21:11:02,901 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 21:11:02,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:02,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_807358786 2023-07-24 21:11:02,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:02,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:11:02,908 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,908 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,908 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,908 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,908 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-24 21:11:02,912 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286/recovered.edits] 2023-07-24 21:11:02,912 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae/recovered.edits] 2023-07-24 21:11:02,912 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc/recovered.edits] 2023-07-24 21:11:02,912 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b/recovered.edits] 2023-07-24 21:11:02,913 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122/f, FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122/recovered.edits] 2023-07-24 21:11:02,924 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286/recovered.edits/4.seqid 2023-07-24 21:11:02,924 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae/recovered.edits/4.seqid to 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae/recovered.edits/4.seqid 2023-07-24 21:11:02,924 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc/recovered.edits/4.seqid 2023-07-24 21:11:02,925 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b/recovered.edits/4.seqid 2023-07-24 21:11:02,925 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/a55265d696c56332c432ff9870e81286 2023-07-24 21:11:02,925 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/daa2a7fb41ab9fde81e1b12d249bcfae 2023-07-24 21:11:02,925 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/f56752d48b5322e803632455c8e896fc 2023-07-24 21:11:02,926 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/11dbb33e7c88078277f27dfee103769b 2023-07-24 21:11:02,926 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122/recovered.edits/4.seqid to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/archive/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122/recovered.edits/4.seqid 2023-07-24 21:11:02,927 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/.tmp/data/default/Group_testDisabledTableMove/278702a87a5b9ecb1de7d5f7ae1f5122 2023-07-24 21:11:02,927 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 21:11:02,930 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 21:11:02,932 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-24 21:11:02,937 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-24 21:11:02,938 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 21:11:02,938 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-24 21:11:02,938 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233062938"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:02,938 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233062938"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:02,938 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233062938"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:02,939 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233062938"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:02,939 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233062938"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:02,940 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 21:11:02,940 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a55265d696c56332c432ff9870e81286, NAME => 'Group_testDisabledTableMove,,1690233061952.a55265d696c56332c432ff9870e81286.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f56752d48b5322e803632455c8e896fc, NAME => 'Group_testDisabledTableMove,aaaaa,1690233061952.f56752d48b5322e803632455c8e896fc.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 11dbb33e7c88078277f27dfee103769b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690233061952.11dbb33e7c88078277f27dfee103769b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 278702a87a5b9ecb1de7d5f7ae1f5122, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690233061952.278702a87a5b9ecb1de7d5f7ae1f5122.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => daa2a7fb41ab9fde81e1b12d249bcfae, NAME => 'Group_testDisabledTableMove,zzzzz,1690233061952.daa2a7fb41ab9fde81e1b12d249bcfae.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 21:11:02,940 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-24 21:11:02,941 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690233062940"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:02,942 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-24 21:11:02,944 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 21:11:02,945 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 47 msec 2023-07-24 21:11:03,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-24 21:11:03,013 INFO [Listener at localhost/42247] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-24 21:11:03,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:03,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:03,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:03,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:11:03,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:03,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:35829] to rsgroup default 2023-07-24 21:11:03,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:03,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_807358786 2023-07-24 21:11:03,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:03,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:11:03,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_807358786, current retry=0 2023-07-24 21:11:03,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35829,1690233037637, jenkins-hbase4.apache.org,39543,1690233037533] are moved back to Group_testDisabledTableMove_807358786 2023-07-24 21:11:03,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_807358786 => default 2023-07-24 21:11:03,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:03,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_807358786 2023-07-24 21:11:03,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:03,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:03,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 21:11:03,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:03,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:03,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:11:03,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:03,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:03,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:03,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:03,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:03,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:03,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:03,037 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:03,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:03,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:03,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:03,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:03,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:03,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:03,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:03,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:11:03,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:03,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 953 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234263045, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:11:03,046 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:03,048 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:03,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:03,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:03,048 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:03,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:03,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:03,066 INFO [Listener at localhost/42247] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=514 (was 512) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_559164390_17 at /127.0.0.1:55588 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1385438533_17 at /127.0.0.1:33636 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62d0debf-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x526d64d3-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=788 (was 770) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=448 (was 418) - SystemLoadAverage LEAK? -, ProcessCount=177 (was 177), AvailableMemoryMB=5619 (was 5612) - AvailableMemoryMB LEAK? 
- 2023-07-24 21:11:03,066 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 21:11:03,082 INFO [Listener at localhost/42247] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=514, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=448, ProcessCount=177, AvailableMemoryMB=5618 2023-07-24 21:11:03,082 WARN [Listener at localhost/42247] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 21:11:03,082 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-24 21:11:03,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:03,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:03,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:03,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:11:03,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:03,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:03,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:03,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:03,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:03,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:03,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:03,094 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:03,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:03,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:03,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-24 21:11:03,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:03,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:03,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:03,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:03,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37361] to rsgroup master 2023-07-24 21:11:03,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:03,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] ipc.CallRunner(144): callId: 981 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60356 deadline: 1690234263107, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 2023-07-24 21:11:03,108 WARN [Listener at localhost/42247] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37361 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:11:03,109 INFO [Listener at localhost/42247] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:03,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:03,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:03,110 INFO [Listener at localhost/42247] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35829, jenkins-hbase4.apache.org:39543, jenkins-hbase4.apache.org:40083, jenkins-hbase4.apache.org:43799], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:03,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:03,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37361] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:03,111 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 21:11:03,111 INFO [Listener at localhost/42247] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 21:11:03,111 DEBUG [Listener at localhost/42247] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x49f725ef to 127.0.0.1:59094 2023-07-24 21:11:03,111 DEBUG [Listener at localhost/42247] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,112 DEBUG [Listener at localhost/42247] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 21:11:03,113 DEBUG [Listener at localhost/42247] util.JVMClusterUtil(257): Found active master hash=2090242173, stopped=false 2023-07-24 21:11:03,113 DEBUG [Listener at localhost/42247] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 21:11:03,113 DEBUG [Listener at localhost/42247] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 21:11:03,113 INFO [Listener at localhost/42247] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:11:03,114 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:03,115 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:03,115 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:03,115 
DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:03,115 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:03,115 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:03,115 INFO [Listener at localhost/42247] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 21:11:03,115 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:03,115 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:03,115 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:03,116 DEBUG [Listener at localhost/42247] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x47902e6c to 127.0.0.1:59094 2023-07-24 21:11:03,116 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:03,116 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:03,116 DEBUG [Listener at localhost/42247] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,117 INFO [Listener at localhost/42247] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39543,1690233037533' ***** 2023-07-24 21:11:03,117 INFO [Listener at localhost/42247] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:03,117 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:03,117 INFO [Listener at localhost/42247] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35829,1690233037637' ***** 2023-07-24 21:11:03,118 INFO [Listener at localhost/42247] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:03,118 INFO [Listener at localhost/42247] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40083,1690233037694' ***** 2023-07-24 21:11:03,119 INFO [Listener at localhost/42247] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:03,118 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:03,122 INFO [Listener at localhost/42247] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43799,1690233041130' ***** 2023-07-24 21:11:03,122 INFO 
[RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:03,122 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1064): Closing user regions 2023-07-24 21:11:03,123 INFO [Listener at localhost/42247] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:03,125 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(3305): Received CLOSE for 0aa6c5b31ae7fded5577dadecfbf135f 2023-07-24 21:11:03,131 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(3305): Received CLOSE for 27723428b4c241280e87cd60e505360f 2023-07-24 21:11:03,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0aa6c5b31ae7fded5577dadecfbf135f, disabling compactions & flushes 2023-07-24 21:11:03,131 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(3305): Received CLOSE for 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:03,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:11:03,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:11:03,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. after waiting 0 ms 2023-07-24 21:11:03,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:11:03,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0aa6c5b31ae7fded5577dadecfbf135f 1/1 column families, dataSize=27.07 KB heapSize=44.69 KB 2023-07-24 21:11:03,132 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:03,136 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:03,136 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:03,138 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:03,141 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:03,141 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:03,141 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:03,141 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:03,141 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:03,143 INFO [RS:2;jenkins-hbase4:40083] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@273cd09d{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:03,143 INFO [RS:0;jenkins-hbase4:39543] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@2e477f98{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:03,143 INFO [RS:3;jenkins-hbase4:43799] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4b822857{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:03,143 INFO [RS:1;jenkins-hbase4:35829] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1ca57311{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:03,148 INFO [RS:3;jenkins-hbase4:43799] server.AbstractConnector(383): Stopped ServerConnector@2d7a97c6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:03,148 INFO [RS:2;jenkins-hbase4:40083] server.AbstractConnector(383): Stopped ServerConnector@25fe622e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:03,148 INFO [RS:3;jenkins-hbase4:43799] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:03,148 INFO [RS:1;jenkins-hbase4:35829] server.AbstractConnector(383): Stopped ServerConnector@b5441b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:03,148 INFO [RS:0;jenkins-hbase4:39543] server.AbstractConnector(383): Stopped ServerConnector@1f3e1e33{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:03,149 INFO [RS:1;jenkins-hbase4:35829] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:03,149 INFO [RS:0;jenkins-hbase4:39543] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:03,149 INFO [RS:3;jenkins-hbase4:43799] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@d4406e7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:03,148 INFO [RS:2;jenkins-hbase4:40083] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:03,150 INFO [RS:1;jenkins-hbase4:35829] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1f034a4d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:03,150 INFO [RS:0;jenkins-hbase4:39543] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@43eef2eb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:03,150 INFO [RS:3;jenkins-hbase4:43799] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@161302d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:03,152 INFO [RS:1;jenkins-hbase4:35829] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7f564b48{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:03,151 INFO [RS:2;jenkins-hbase4:40083] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@41aab68c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:03,152 INFO [RS:0;jenkins-hbase4:39543] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@58393e7a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:03,153 INFO [RS:2;jenkins-hbase4:40083] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e754d6b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:03,156 INFO [RS:1;jenkins-hbase4:35829] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:03,156 INFO [RS:2;jenkins-hbase4:40083] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:03,156 INFO [RS:1;jenkins-hbase4:35829] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 21:11:03,156 INFO [RS:2;jenkins-hbase4:40083] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 21:11:03,156 INFO [RS:0;jenkins-hbase4:39543] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:03,156 INFO [RS:3;jenkins-hbase4:43799] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:03,156 INFO [RS:0;jenkins-hbase4:39543] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 21:11:03,156 INFO [RS:0;jenkins-hbase4:39543] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:03,156 INFO [RS:2;jenkins-hbase4:40083] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:03,156 INFO [RS:1;jenkins-hbase4:35829] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:03,157 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:11:03,157 DEBUG [RS:1;jenkins-hbase4:35829] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4e64ade4 to 127.0.0.1:59094 2023-07-24 21:11:03,157 DEBUG [RS:1;jenkins-hbase4:35829] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,157 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35829,1690233037637; all regions closed. 2023-07-24 21:11:03,156 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(3305): Received CLOSE for cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:03,156 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:11:03,158 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:03,156 INFO [RS:3;jenkins-hbase4:43799] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-24 21:11:03,158 DEBUG [RS:2;jenkins-hbase4:40083] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7996ac9f to 127.0.0.1:59094 2023-07-24 21:11:03,158 DEBUG [RS:0;jenkins-hbase4:39543] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2cb19361 to 127.0.0.1:59094 2023-07-24 21:11:03,158 DEBUG [RS:2;jenkins-hbase4:40083] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,158 INFO [RS:3;jenkins-hbase4:43799] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:03,158 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 21:11:03,158 DEBUG [RS:0;jenkins-hbase4:39543] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,158 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1478): Online Regions={cd960f0003b43c9f5355f1dc85f68cf3=testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3.} 2023-07-24 21:11:03,158 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(3307): Received CLOSE for the region: 27723428b4c241280e87cd60e505360f, which we are already trying to CLOSE, but not completed yet 2023-07-24 21:11:03,158 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39543,1690233037533; all regions closed. 2023-07-24 21:11:03,158 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(3307): Received CLOSE for the region: 7b023beb5d50e7f867d5ff60b82fafc2, which we are already trying to CLOSE, but not completed yet 2023-07-24 21:11:03,159 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:03,159 DEBUG [RS:3;jenkins-hbase4:43799] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x74ae934f to 127.0.0.1:59094 2023-07-24 21:11:03,159 DEBUG [RS:3;jenkins-hbase4:43799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,159 INFO [RS:3;jenkins-hbase4:43799] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:03,159 INFO [RS:3;jenkins-hbase4:43799] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:03,159 INFO [RS:3;jenkins-hbase4:43799] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 21:11:03,159 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 21:11:03,159 DEBUG [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1504): Waiting on cd960f0003b43c9f5355f1dc85f68cf3 2023-07-24 21:11:03,164 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-24 21:11:03,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd960f0003b43c9f5355f1dc85f68cf3, disabling compactions & flushes 2023-07-24 21:11:03,166 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 21:11:03,166 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 0aa6c5b31ae7fded5577dadecfbf135f=hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f., 27723428b4c241280e87cd60e505360f=hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f., 7b023beb5d50e7f867d5ff60b82fafc2=unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2.} 2023-07-24 21:11:03,166 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 21:11:03,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:03,166 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1504): Waiting on 0aa6c5b31ae7fded5577dadecfbf135f, 1588230740, 27723428b4c241280e87cd60e505360f, 7b023beb5d50e7f867d5ff60b82fafc2 2023-07-24 21:11:03,166 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 21:11:03,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:03,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. after waiting 0 ms 2023-07-24 21:11:03,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 
2023-07-24 21:11:03,166 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 21:11:03,167 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 21:11:03,167 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=76.59 KB heapSize=120.50 KB 2023-07-24 21:11:03,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.07 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/.tmp/m/892ca0ab3729443aa0e3c56a6346dfe2 2023-07-24 21:11:03,195 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/testRename/cd960f0003b43c9f5355f1dc85f68cf3/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 21:11:03,195 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:03,195 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd960f0003b43c9f5355f1dc85f68cf3: 2023-07-24 21:11:03,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690233056309.cd960f0003b43c9f5355f1dc85f68cf3. 2023-07-24 21:11:03,200 DEBUG [RS:1;jenkins-hbase4:35829] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs 2023-07-24 21:11:03,200 INFO [RS:1;jenkins-hbase4:35829] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35829%2C1690233037637:(num 1690233039591) 2023-07-24 21:11:03,200 DEBUG [RS:1;jenkins-hbase4:35829] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,200 INFO [RS:1;jenkins-hbase4:35829] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:03,204 INFO [RS:1;jenkins-hbase4:35829] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:03,204 DEBUG [RS:0;jenkins-hbase4:39543] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs 2023-07-24 21:11:03,205 INFO [RS:0;jenkins-hbase4:39543] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39543%2C1690233037533.meta:.meta(num 1690233039856) 2023-07-24 21:11:03,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 892ca0ab3729443aa0e3c56a6346dfe2 2023-07-24 21:11:03,207 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/.tmp/m/892ca0ab3729443aa0e3c56a6346dfe2 as 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m/892ca0ab3729443aa0e3c56a6346dfe2 2023-07-24 21:11:03,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 892ca0ab3729443aa0e3c56a6346dfe2 2023-07-24 21:11:03,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/m/892ca0ab3729443aa0e3c56a6346dfe2, entries=28, sequenceid=101, filesize=6.1 K 2023-07-24 21:11:03,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.07 KB/27718, heapSize ~44.67 KB/45744, currentSize=0 B/0 for 0aa6c5b31ae7fded5577dadecfbf135f in 83ms, sequenceid=101, compaction requested=false 2023-07-24 21:11:03,214 INFO [RS:1;jenkins-hbase4:35829] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:03,214 INFO [RS:1;jenkins-hbase4:35829] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:03,215 INFO [RS:1;jenkins-hbase4:35829] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 21:11:03,215 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:03,225 INFO [RS:1;jenkins-hbase4:35829] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35829 2023-07-24 21:11:03,232 DEBUG [RS:0;jenkins-hbase4:39543] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs 2023-07-24 21:11:03,232 INFO [RS:0;jenkins-hbase4:39543] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39543%2C1690233037533:(num 1690233039591) 2023-07-24 21:11:03,233 DEBUG [RS:0;jenkins-hbase4:39543] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,233 INFO [RS:0;jenkins-hbase4:39543] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35829,1690233037637 2023-07-24 21:11:03,235 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:03,236 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35829,1690233037637] 2023-07-24 21:11:03,236 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35829,1690233037637; numProcessing=1 2023-07-24 21:11:03,238 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35829,1690233037637 already deleted, retry=false 2023-07-24 21:11:03,238 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35829,1690233037637 expired; onlineServers=3 2023-07-24 21:11:03,241 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/rsgroup/0aa6c5b31ae7fded5577dadecfbf135f/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-24 21:11:03,241 INFO [RS:0;jenkins-hbase4:39543] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:03,242 INFO [RS:0;jenkins-hbase4:39543] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:03,242 INFO [RS:0;jenkins-hbase4:39543] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:03,243 INFO [RS:0;jenkins-hbase4:39543] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 21:11:03,243 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:03,244 INFO [RS:0;jenkins-hbase4:39543] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39543 2023-07-24 21:11:03,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:11:03,246 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 
2023-07-24 21:11:03,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0aa6c5b31ae7fded5577dadecfbf135f: 2023-07-24 21:11:03,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690233040270.0aa6c5b31ae7fded5577dadecfbf135f. 2023-07-24 21:11:03,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 27723428b4c241280e87cd60e505360f, disabling compactions & flushes 2023-07-24 21:11:03,246 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:11:03,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:11:03,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. after waiting 0 ms 2023-07-24 21:11:03,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:11:03,259 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:11:03,259 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:11:03,259 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39543,1690233037533 2023-07-24 21:11:03,259 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:03,260 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39543,1690233037533] 2023-07-24 21:11:03,260 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39543,1690233037533; numProcessing=2 2023-07-24 21:11:03,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/namespace/27723428b4c241280e87cd60e505360f/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-24 21:11:03,263 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39543,1690233037533 already deleted, retry=false 2023-07-24 21:11:03,263 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39543,1690233037533 expired; onlineServers=2 2023-07-24 21:11:03,264 
INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:11:03,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 27723428b4c241280e87cd60e505360f: 2023-07-24 21:11:03,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690233040145.27723428b4c241280e87cd60e505360f. 2023-07-24 21:11:03,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7b023beb5d50e7f867d5ff60b82fafc2, disabling compactions & flushes 2023-07-24 21:11:03,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:11:03,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:11:03,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. after waiting 0 ms 2023-07-24 21:11:03,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:11:03,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/default/unmovedTable/7b023beb5d50e7f867d5ff60b82fafc2/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 21:11:03,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:11:03,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7b023beb5d50e7f867d5ff60b82fafc2: 2023-07-24 21:11:03,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690233057966.7b023beb5d50e7f867d5ff60b82fafc2. 2023-07-24 21:11:03,312 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 21:11:03,312 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 21:11:03,359 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40083,1690233037694; all regions closed. 
2023-07-24 21:11:03,365 DEBUG [RS:2;jenkins-hbase4:40083] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs 2023-07-24 21:11:03,365 INFO [RS:2;jenkins-hbase4:40083] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40083%2C1690233037694:(num 1690233039590) 2023-07-24 21:11:03,365 DEBUG [RS:2;jenkins-hbase4:40083] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,365 INFO [RS:2;jenkins-hbase4:40083] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:03,365 INFO [RS:2;jenkins-hbase4:40083] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:03,366 INFO [RS:2;jenkins-hbase4:40083] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:03,366 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:03,366 INFO [RS:2;jenkins-hbase4:40083] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:03,366 INFO [RS:2;jenkins-hbase4:40083] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 21:11:03,367 INFO [RS:2;jenkins-hbase4:40083] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40083 2023-07-24 21:11:03,367 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 21:11:03,368 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:03,368 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:03,368 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40083,1690233037694 2023-07-24 21:11:03,369 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40083,1690233037694] 2023-07-24 21:11:03,370 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40083,1690233037694; numProcessing=3 2023-07-24 21:11:03,371 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40083,1690233037694 already deleted, retry=false 2023-07-24 21:11:03,371 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40083,1690233037694 expired; onlineServers=1 2023-07-24 21:11:03,567 DEBUG [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 21:11:03,632 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=70.78 KB at sequenceid=210 (bloomFilter=false), 
to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/info/21c17f3acd9d4c5da29ea17a0a3cfdb3 2023-07-24 21:11:03,638 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 21c17f3acd9d4c5da29ea17a0a3cfdb3 2023-07-24 21:11:03,648 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/rep_barrier/0eaa71da037e40ef93f002193ec48042 2023-07-24 21:11:03,654 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0eaa71da037e40ef93f002193ec48042 2023-07-24 21:11:03,672 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/table/073e418dd2be4756a2c4a09cceef461e 2023-07-24 21:11:03,678 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 073e418dd2be4756a2c4a09cceef461e 2023-07-24 21:11:03,679 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/info/21c17f3acd9d4c5da29ea17a0a3cfdb3 as hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info/21c17f3acd9d4c5da29ea17a0a3cfdb3 2023-07-24 21:11:03,685 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 21c17f3acd9d4c5da29ea17a0a3cfdb3 2023-07-24 21:11:03,685 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/info/21c17f3acd9d4c5da29ea17a0a3cfdb3, entries=93, sequenceid=210, filesize=15.5 K 2023-07-24 21:11:03,686 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/rep_barrier/0eaa71da037e40ef93f002193ec48042 as hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/rep_barrier/0eaa71da037e40ef93f002193ec48042 2023-07-24 21:11:03,691 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0eaa71da037e40ef93f002193ec48042 2023-07-24 21:11:03,691 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/rep_barrier/0eaa71da037e40ef93f002193ec48042, entries=18, sequenceid=210, filesize=6.9 K 2023-07-24 21:11:03,692 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/.tmp/table/073e418dd2be4756a2c4a09cceef461e as hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table/073e418dd2be4756a2c4a09cceef461e 2023-07-24 21:11:03,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 073e418dd2be4756a2c4a09cceef461e 2023-07-24 21:11:03,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/table/073e418dd2be4756a2c4a09cceef461e, entries=27, sequenceid=210, filesize=7.2 K 2023-07-24 21:11:03,698 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~76.59 KB/78427, heapSize ~120.45 KB/123344, currentSize=0 B/0 for 1588230740 in 531ms, sequenceid=210, compaction requested=false 2023-07-24 21:11:03,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=18 2023-07-24 21:11:03,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:11:03,709 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 21:11:03,709 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 21:11:03,709 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 21:11:03,767 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43799,1690233041130; all regions closed. 2023-07-24 21:11:03,773 DEBUG [RS:3;jenkins-hbase4:43799] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs 2023-07-24 21:11:03,773 INFO [RS:3;jenkins-hbase4:43799] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43799%2C1690233041130.meta:.meta(num 1690233042283) 2023-07-24 21:11:03,779 DEBUG [RS:3;jenkins-hbase4:43799] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/oldWALs 2023-07-24 21:11:03,779 INFO [RS:3;jenkins-hbase4:43799] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43799%2C1690233041130:(num 1690233041443) 2023-07-24 21:11:03,779 DEBUG [RS:3;jenkins-hbase4:43799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,779 INFO [RS:3;jenkins-hbase4:43799] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:03,779 INFO [RS:3;jenkins-hbase4:43799] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:03,779 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 21:11:03,780 INFO [RS:3;jenkins-hbase4:43799] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43799 2023-07-24 21:11:03,783 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43799,1690233041130 2023-07-24 21:11:03,783 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:03,785 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43799,1690233041130] 2023-07-24 21:11:03,785 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43799,1690233041130; numProcessing=4 2023-07-24 21:11:03,786 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43799,1690233041130 already deleted, retry=false 2023-07-24 21:11:03,786 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43799,1690233041130 expired; onlineServers=0 2023-07-24 21:11:03,786 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37361,1690233035466' ***** 2023-07-24 21:11:03,786 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 21:11:03,787 DEBUG [M:0;jenkins-hbase4:37361] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3dab9a46, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:03,787 INFO [M:0;jenkins-hbase4:37361] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:03,789 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:03,789 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:03,789 INFO [M:0;jenkins-hbase4:37361] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7d3907af{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 21:11:03,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:03,789 INFO [M:0;jenkins-hbase4:37361] server.AbstractConnector(383): Stopped ServerConnector@3b87408a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:03,789 INFO [M:0;jenkins-hbase4:37361] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:03,790 INFO [M:0;jenkins-hbase4:37361] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@352e26e6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:03,791 INFO [M:0;jenkins-hbase4:37361] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1fef8f64{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:03,791 INFO [M:0;jenkins-hbase4:37361] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37361,1690233035466 2023-07-24 21:11:03,791 INFO [M:0;jenkins-hbase4:37361] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37361,1690233035466; all regions closed. 2023-07-24 21:11:03,791 DEBUG [M:0;jenkins-hbase4:37361] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:03,791 INFO [M:0;jenkins-hbase4:37361] master.HMaster(1491): Stopping master jetty server 2023-07-24 21:11:03,792 INFO [M:0;jenkins-hbase4:37361] server.AbstractConnector(383): Stopped ServerConnector@5253f0fe{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:03,792 DEBUG [M:0;jenkins-hbase4:37361] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 21:11:03,792 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 21:11:03,793 DEBUG [M:0;jenkins-hbase4:37361] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 21:11:03,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233039122] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233039122,5,FailOnTimeoutGroup] 2023-07-24 21:11:03,793 INFO [M:0;jenkins-hbase4:37361] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 21:11:03,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233039121] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233039121,5,FailOnTimeoutGroup] 2023-07-24 21:11:03,793 INFO [M:0;jenkins-hbase4:37361] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-24 21:11:03,793 INFO [M:0;jenkins-hbase4:37361] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 21:11:03,793 DEBUG [M:0;jenkins-hbase4:37361] master.HMaster(1512): Stopping service threads 2023-07-24 21:11:03,793 INFO [M:0;jenkins-hbase4:37361] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 21:11:03,793 ERROR [M:0;jenkins-hbase4:37361] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-24 21:11:03,794 INFO [M:0;jenkins-hbase4:37361] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 21:11:03,794 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 21:11:03,795 DEBUG [M:0;jenkins-hbase4:37361] zookeeper.ZKUtil(398): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 21:11:03,795 WARN [M:0;jenkins-hbase4:37361] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 21:11:03,795 INFO [M:0;jenkins-hbase4:37361] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 21:11:03,795 INFO [M:0;jenkins-hbase4:37361] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 21:11:03,795 DEBUG [M:0;jenkins-hbase4:37361] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 21:11:03,795 INFO [M:0;jenkins-hbase4:37361] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:03,795 DEBUG [M:0;jenkins-hbase4:37361] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:03,795 DEBUG [M:0;jenkins-hbase4:37361] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 21:11:03,795 DEBUG [M:0;jenkins-hbase4:37361] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 21:11:03,795 INFO [M:0;jenkins-hbase4:37361] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.20 KB heapSize=621.32 KB 2023-07-24 21:11:03,814 INFO [M:0;jenkins-hbase4:37361] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.20 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8ca18a3f4ca34ed3b42dfb46ce2230ed 2023-07-24 21:11:03,815 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:03,815 INFO [RS:2;jenkins-hbase4:40083] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40083,1690233037694; zookeeper connection closed. 2023-07-24 21:11:03,815 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:40083-0x101992bd9f80003, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:03,816 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@e77d29] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@e77d29 2023-07-24 21:11:03,819 DEBUG [M:0;jenkins-hbase4:37361] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8ca18a3f4ca34ed3b42dfb46ce2230ed as hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8ca18a3f4ca34ed3b42dfb46ce2230ed 2023-07-24 21:11:03,824 INFO [M:0;jenkins-hbase4:37361] regionserver.HStore(1080): Added hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8ca18a3f4ca34ed3b42dfb46ce2230ed, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-24 21:11:03,825 INFO [M:0;jenkins-hbase4:37361] regionserver.HRegion(2948): Finished flush of dataSize ~519.20 KB/531657, heapSize ~621.30 KB/636216, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=1152, compaction requested=false 2023-07-24 21:11:03,827 INFO [M:0;jenkins-hbase4:37361] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:03,827 DEBUG [M:0;jenkins-hbase4:37361] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:11:03,830 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:03,830 INFO [M:0;jenkins-hbase4:37361] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-07-24 21:11:03,831 INFO [M:0;jenkins-hbase4:37361] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37361 2023-07-24 21:11:03,834 DEBUG [M:0;jenkins-hbase4:37361] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37361,1690233035466 already deleted, retry=false 2023-07-24 21:11:03,915 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:03,916 INFO [RS:0;jenkins-hbase4:39543] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39543,1690233037533; zookeeper connection closed. 2023-07-24 21:11:03,916 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:39543-0x101992bd9f80001, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:03,916 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@11781e1e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@11781e1e 2023-07-24 21:11:04,016 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:04,016 INFO [RS:1;jenkins-hbase4:35829] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35829,1690233037637; zookeeper connection closed. 2023-07-24 21:11:04,016 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:35829-0x101992bd9f80002, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:04,016 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3d326209] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3d326209 2023-07-24 21:11:04,116 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:04,116 INFO [M:0;jenkins-hbase4:37361] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37361,1690233035466; zookeeper connection closed. 2023-07-24 21:11:04,116 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): master:37361-0x101992bd9f80000, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:04,216 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:04,216 INFO [RS:3;jenkins-hbase4:43799] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43799,1690233041130; zookeeper connection closed. 
2023-07-24 21:11:04,216 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x101992bd9f8000b, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:04,217 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1e50410e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1e50410e 2023-07-24 21:11:04,217 INFO [Listener at localhost/42247] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-24 21:11:04,217 WARN [Listener at localhost/42247] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:04,221 INFO [Listener at localhost/42247] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:04,325 WARN [BP-658668866-172.31.14.131-1690233031453 heartbeating to localhost/127.0.0.1:44343] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 21:11:04,325 WARN [BP-658668866-172.31.14.131-1690233031453 heartbeating to localhost/127.0.0.1:44343] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-658668866-172.31.14.131-1690233031453 (Datanode Uuid 59e611f2-095b-4fe6-b4e9-83977eff25c7) service to localhost/127.0.0.1:44343 2023-07-24 21:11:04,327 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data5/current/BP-658668866-172.31.14.131-1690233031453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:04,327 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data6/current/BP-658668866-172.31.14.131-1690233031453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:04,329 WARN [Listener at localhost/42247] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:04,333 INFO [Listener at localhost/42247] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:04,436 WARN [BP-658668866-172.31.14.131-1690233031453 heartbeating to localhost/127.0.0.1:44343] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 21:11:04,436 WARN [BP-658668866-172.31.14.131-1690233031453 heartbeating to localhost/127.0.0.1:44343] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-658668866-172.31.14.131-1690233031453 (Datanode Uuid 0f426e30-868f-4b8e-bbc4-a8d11e6da9c3) service to localhost/127.0.0.1:44343 2023-07-24 21:11:04,437 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data3/current/BP-658668866-172.31.14.131-1690233031453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:04,437 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data4/current/BP-658668866-172.31.14.131-1690233031453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:04,439 WARN [Listener at localhost/42247] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:04,444 INFO [Listener at localhost/42247] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:04,547 WARN [BP-658668866-172.31.14.131-1690233031453 heartbeating to localhost/127.0.0.1:44343] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 21:11:04,547 WARN [BP-658668866-172.31.14.131-1690233031453 heartbeating to localhost/127.0.0.1:44343] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-658668866-172.31.14.131-1690233031453 (Datanode Uuid 3fedd356-1388-40ee-8b14-2235f4bcff53) service to localhost/127.0.0.1:44343 2023-07-24 21:11:04,548 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data1/current/BP-658668866-172.31.14.131-1690233031453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:04,548 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/cluster_f0bf26cf-b78a-a726-83d0-51f842c523e1/dfs/data/data2/current/BP-658668866-172.31.14.131-1690233031453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:04,577 INFO [Listener at localhost/42247] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:04,696 INFO [Listener at localhost/42247] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 21:11:04,745 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 21:11:04,745 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 21:11:04,745 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.log.dir so I do NOT create it in target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46 2023-07-24 21:11:04,745 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/23f16da4-a6ea-536d-6f0c-3eb6019f335f/hadoop.tmp.dir so I do NOT create it in target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46 2023-07-24 21:11:04,746 INFO [Listener at localhost/42247] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7, deleteOnExit=true 2023-07-24 21:11:04,746 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 21:11:04,746 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/test.cache.data in system properties and HBase conf 2023-07-24 21:11:04,746 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 21:11:04,746 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir in system properties and HBase conf 2023-07-24 21:11:04,746 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 21:11:04,746 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 21:11:04,746 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 21:11:04,746 DEBUG [Listener at localhost/42247] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-24 21:11:04,747 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 21:11:04,747 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 21:11:04,747 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 21:11:04,747 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 21:11:04,747 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 21:11:04,747 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 21:11:04,747 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 21:11:04,747 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 21:11:04,748 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 21:11:04,748 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/nfs.dump.dir in system properties and HBase conf 2023-07-24 21:11:04,748 INFO [Listener at localhost/42247] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/java.io.tmpdir in system properties and HBase conf 2023-07-24 21:11:04,748 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 21:11:04,748 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 21:11:04,748 INFO [Listener at localhost/42247] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 21:11:04,752 WARN [Listener at localhost/42247] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 21:11:04,752 WARN [Listener at localhost/42247] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 21:11:04,793 DEBUG [Listener at localhost/42247-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101992bd9f8000a, quorum=127.0.0.1:59094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 21:11:04,793 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101992bd9f8000a, quorum=127.0.0.1:59094, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 21:11:04,802 WARN [Listener at localhost/42247] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:11:04,805 INFO [Listener at localhost/42247] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:11:04,812 INFO [Listener at localhost/42247] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/java.io.tmpdir/Jetty_localhost_38783_hdfs____qwkpef/webapp 2023-07-24 21:11:04,913 INFO [Listener at localhost/42247] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38783 2023-07-24 21:11:04,919 WARN [Listener at localhost/42247] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 21:11:04,919 WARN [Listener at localhost/42247] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 21:11:04,979 WARN [Listener at localhost/41467] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:11:05,031 WARN [Listener at localhost/41467] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:11:05,034 WARN [Listener 
at localhost/41467] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:11:05,035 INFO [Listener at localhost/41467] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:11:05,045 INFO [Listener at localhost/41467] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/java.io.tmpdir/Jetty_localhost_33215_datanode____.hbvov7/webapp 2023-07-24 21:11:05,155 INFO [Listener at localhost/41467] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33215 2023-07-24 21:11:05,179 WARN [Listener at localhost/46825] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:11:05,220 WARN [Listener at localhost/46825] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:11:05,222 WARN [Listener at localhost/46825] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:11:05,224 INFO [Listener at localhost/46825] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:11:05,228 INFO [Listener at localhost/46825] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/java.io.tmpdir/Jetty_localhost_43271_datanode____.ppe8rw/webapp 2023-07-24 21:11:05,348 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xed682d3844c6e3db: Processing first storage report for DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898 from datanode dc0523c6-7882-482b-bfa9-1378addf4a9c 2023-07-24 21:11:05,348 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xed682d3844c6e3db: from storage DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898 node DatanodeRegistration(127.0.0.1:37445, datanodeUuid=dc0523c6-7882-482b-bfa9-1378addf4a9c, infoPort=35795, infoSecurePort=0, ipcPort=46825, storageInfo=lv=-57;cid=testClusterID;nsid=470283398;c=1690233064755), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:05,349 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xed682d3844c6e3db: Processing first storage report for DS-ae7ddd48-0cf9-4542-8fb2-ebc3bfce45b2 from datanode dc0523c6-7882-482b-bfa9-1378addf4a9c 2023-07-24 21:11:05,349 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xed682d3844c6e3db: from storage DS-ae7ddd48-0cf9-4542-8fb2-ebc3bfce45b2 node DatanodeRegistration(127.0.0.1:37445, datanodeUuid=dc0523c6-7882-482b-bfa9-1378addf4a9c, infoPort=35795, infoSecurePort=0, ipcPort=46825, storageInfo=lv=-57;cid=testClusterID;nsid=470283398;c=1690233064755), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:05,372 INFO [Listener at localhost/46825] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43271 2023-07-24 21:11:05,394 WARN [Listener at localhost/41475] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-24 21:11:05,449 WARN [Listener at localhost/41475] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:11:05,454 WARN [Listener at localhost/41475] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:11:05,456 INFO [Listener at localhost/41475] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:11:05,485 INFO [Listener at localhost/41475] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/java.io.tmpdir/Jetty_localhost_44153_datanode____iytoqe/webapp 2023-07-24 21:11:05,560 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3889ea8297aa6e25: Processing first storage report for DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669 from datanode 2a2ab15a-4ccd-4591-969a-44ab806e3422 2023-07-24 21:11:05,560 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3889ea8297aa6e25: from storage DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669 node DatanodeRegistration(127.0.0.1:46285, datanodeUuid=2a2ab15a-4ccd-4591-969a-44ab806e3422, infoPort=45795, infoSecurePort=0, ipcPort=41475, storageInfo=lv=-57;cid=testClusterID;nsid=470283398;c=1690233064755), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:05,560 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3889ea8297aa6e25: Processing first storage report for DS-0011d633-b7b5-4b03-b79b-84bc8e048b50 from datanode 2a2ab15a-4ccd-4591-969a-44ab806e3422 2023-07-24 21:11:05,560 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3889ea8297aa6e25: from storage DS-0011d633-b7b5-4b03-b79b-84bc8e048b50 node DatanodeRegistration(127.0.0.1:46285, datanodeUuid=2a2ab15a-4ccd-4591-969a-44ab806e3422, infoPort=45795, infoSecurePort=0, ipcPort=41475, storageInfo=lv=-57;cid=testClusterID;nsid=470283398;c=1690233064755), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:05,612 INFO [Listener at localhost/41475] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44153 2023-07-24 21:11:05,622 WARN [Listener at localhost/36605] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:11:05,652 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:11:05,652 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 21:11:05,652 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 21:11:05,717 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3fa7121803f18643: Processing first storage report for DS-391a8eb3-426f-4457-b45d-02e8ed8f6152 from 
datanode 12303e03-e7ef-4b95-a514-4f1ebd9a6ff4 2023-07-24 21:11:05,717 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3fa7121803f18643: from storage DS-391a8eb3-426f-4457-b45d-02e8ed8f6152 node DatanodeRegistration(127.0.0.1:42189, datanodeUuid=12303e03-e7ef-4b95-a514-4f1ebd9a6ff4, infoPort=44427, infoSecurePort=0, ipcPort=36605, storageInfo=lv=-57;cid=testClusterID;nsid=470283398;c=1690233064755), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:05,717 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3fa7121803f18643: Processing first storage report for DS-fdda7fc9-4a81-4cd5-8c40-98e6a7d16c58 from datanode 12303e03-e7ef-4b95-a514-4f1ebd9a6ff4 2023-07-24 21:11:05,717 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3fa7121803f18643: from storage DS-fdda7fc9-4a81-4cd5-8c40-98e6a7d16c58 node DatanodeRegistration(127.0.0.1:42189, datanodeUuid=12303e03-e7ef-4b95-a514-4f1ebd9a6ff4, infoPort=44427, infoSecurePort=0, ipcPort=36605, storageInfo=lv=-57;cid=testClusterID;nsid=470283398;c=1690233064755), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:05,733 DEBUG [Listener at localhost/36605] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46 2023-07-24 21:11:05,735 INFO [Listener at localhost/36605] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/zookeeper_0, clientPort=53256, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 21:11:05,737 INFO [Listener at localhost/36605] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53256 2023-07-24 21:11:05,737 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,738 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,755 INFO [Listener at localhost/36605] util.FSUtils(471): Created version file at hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640 with version=8 2023-07-24 21:11:05,755 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/hbase-staging 2023-07-24 21:11:05,756 DEBUG [Listener at localhost/36605] hbase.LocalHBaseCluster(134): 
Setting Master Port to random. 2023-07-24 21:11:05,756 DEBUG [Listener at localhost/36605] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 21:11:05,756 DEBUG [Listener at localhost/36605] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 21:11:05,756 DEBUG [Listener at localhost/36605] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-24 21:11:05,757 INFO [Listener at localhost/36605] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:05,757 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,757 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,757 INFO [Listener at localhost/36605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:11:05,757 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,757 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:05,758 INFO [Listener at localhost/36605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:05,758 INFO [Listener at localhost/36605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36583 2023-07-24 21:11:05,759 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,760 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,761 INFO [Listener at localhost/36605] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36583 connecting to ZooKeeper ensemble=127.0.0.1:53256 2023-07-24 21:11:05,768 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:365830x0, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:05,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36583-0x101992c540a0000 connected 2023-07-24 21:11:05,784 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:05,785 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:05,785 DEBUG 
[Listener at localhost/36605] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:05,785 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36583 2023-07-24 21:11:05,786 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36583 2023-07-24 21:11:05,786 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36583 2023-07-24 21:11:05,786 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36583 2023-07-24 21:11:05,786 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36583 2023-07-24 21:11:05,789 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:05,789 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:05,789 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:05,789 INFO [Listener at localhost/36605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 21:11:05,789 INFO [Listener at localhost/36605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:05,789 INFO [Listener at localhost/36605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:05,789 INFO [Listener at localhost/36605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
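The shutdown/restart recorded above ("Minicluster is down" followed by "Starting up minicluster with option: StartMiniClusterOption{numMasters=1, ..., numRegionServers=3, ..., numDataNodes=3, ..., numZkServers=1, ...}") is what HBaseTestingUtility emits when a test tears down and restarts its mini cluster. A minimal Java sketch of the corresponding calls follows; it assumes a test-scoped utility instance named TEST_UTIL (that name is chosen here for illustration and does not appear in this log):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterRestartSketch {
  public static void restart(HBaseTestingUtility TEST_UTIL) throws Exception {
    // Tears down the running mini cluster; logs "Minicluster is down" when finished.
    TEST_UTIL.shutdownMiniCluster();

    // Same shape as the option printed in the log: 1 master, 3 region servers,
    // 3 data nodes, 1 ZooKeeper server, no pre-created root or WAL directory.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .createRootDir(false)
        .createWALDir(false)
        .build();

    // Logs "Starting up minicluster with option: StartMiniClusterOption{...}" and then
    // brings up DFS, MiniZooKeeperCluster, the master and the region servers seen here.
    TEST_UTIL.startMiniCluster(option);
  }
}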
2023-07-24 21:11:05,790 INFO [Listener at localhost/36605] http.HttpServer(1146): Jetty bound to port 39001 2023-07-24 21:11:05,790 INFO [Listener at localhost/36605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:05,792 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,792 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5823e00a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:05,793 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,793 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@19ed0121{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:05,802 INFO [Listener at localhost/36605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:05,803 INFO [Listener at localhost/36605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:05,803 INFO [Listener at localhost/36605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:05,804 INFO [Listener at localhost/36605] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:11:05,805 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,807 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@56a25f8{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 21:11:05,808 INFO [Listener at localhost/36605] server.AbstractConnector(333): Started ServerConnector@61d6c91f{HTTP/1.1, (http/1.1)}{0.0.0.0:39001} 2023-07-24 21:11:05,808 INFO [Listener at localhost/36605] server.Server(415): Started @36454ms 2023-07-24 21:11:05,808 INFO [Listener at localhost/36605] master.HMaster(444): hbase.rootdir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640, hbase.cluster.distributed=false 2023-07-24 21:11:05,826 INFO [Listener at localhost/36605] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:05,826 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,826 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,826 INFO [Listener at localhost/36605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 
21:11:05,826 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,826 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:05,826 INFO [Listener at localhost/36605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:05,827 INFO [Listener at localhost/36605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33693 2023-07-24 21:11:05,827 INFO [Listener at localhost/36605] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:11:05,828 DEBUG [Listener at localhost/36605] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:11:05,829 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,830 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,831 INFO [Listener at localhost/36605] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33693 connecting to ZooKeeper ensemble=127.0.0.1:53256 2023-07-24 21:11:05,836 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:336930x0, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:05,838 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33693-0x101992c540a0001 connected 2023-07-24 21:11:05,838 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:05,838 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:05,839 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:05,840 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33693 2023-07-24 21:11:05,840 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33693 2023-07-24 21:11:05,842 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33693 2023-07-24 21:11:05,842 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33693 2023-07-24 21:11:05,843 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33693 2023-07-24 21:11:05,844 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:05,845 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:05,845 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:05,845 INFO [Listener at localhost/36605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:11:05,845 INFO [Listener at localhost/36605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:05,846 INFO [Listener at localhost/36605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:05,846 INFO [Listener at localhost/36605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 21:11:05,847 INFO [Listener at localhost/36605] http.HttpServer(1146): Jetty bound to port 45865 2023-07-24 21:11:05,847 INFO [Listener at localhost/36605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:05,850 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,851 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@11ee97d1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:05,851 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,851 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b389078{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:05,857 INFO [Listener at localhost/36605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:05,858 INFO [Listener at localhost/36605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:05,858 INFO [Listener at localhost/36605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:05,858 INFO [Listener at localhost/36605] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:11:05,859 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,860 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@19da0d72{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:05,861 INFO [Listener at localhost/36605] server.AbstractConnector(333): Started ServerConnector@14ac6c96{HTTP/1.1, (http/1.1)}{0.0.0.0:45865} 2023-07-24 21:11:05,861 INFO [Listener at localhost/36605] server.Server(415): Started @36507ms 2023-07-24 21:11:05,873 INFO [Listener at localhost/36605] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:05,873 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,873 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,873 INFO [Listener at localhost/36605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:11:05,873 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,873 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:05,873 INFO [Listener at localhost/36605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:05,874 INFO [Listener at localhost/36605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33655 2023-07-24 21:11:05,874 INFO [Listener at localhost/36605] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:11:05,876 DEBUG [Listener at localhost/36605] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:11:05,876 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,877 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,878 INFO [Listener at localhost/36605] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33655 connecting to ZooKeeper ensemble=127.0.0.1:53256 2023-07-24 21:11:05,882 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:336550x0, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:05,883 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33655-0x101992c540a0002 connected 2023-07-24 21:11:05,883 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): 
regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:05,884 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:05,885 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:05,886 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33655 2023-07-24 21:11:05,887 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33655 2023-07-24 21:11:05,887 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33655 2023-07-24 21:11:05,889 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33655 2023-07-24 21:11:05,890 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33655 2023-07-24 21:11:05,892 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:05,892 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:05,892 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:05,893 INFO [Listener at localhost/36605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:11:05,893 INFO [Listener at localhost/36605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:05,893 INFO [Listener at localhost/36605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:05,893 INFO [Listener at localhost/36605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
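The lines above show each region server instantiating its RPC executors, binding a NettyRpcServer, and opening a ZooKeeper session against the 127.0.0.1:53256 ensemble. Once startup completes, a test can confirm that the master and the three region servers are live through the public Admin API. A rough sketch, reusing the assumed TEST_UTIL instance from the earlier sketch:

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

public class ClusterLivenessSketch {
  public static void printLiveServers(HBaseTestingUtility TEST_UTIL) throws Exception {
    Admin admin = TEST_UTIL.getAdmin();
    // ClusterMetrics reports the active master plus every live region server, which
    // should match the servers whose startup is being logged in this section.
    ClusterMetrics metrics = admin.getClusterMetrics();
    System.out.println("active master: " + metrics.getMasterName());
    for (ServerName rs : metrics.getLiveServerMetrics().keySet()) {
      System.out.println("live region server: " + rs);
    }
  }
}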
2023-07-24 21:11:05,894 INFO [Listener at localhost/36605] http.HttpServer(1146): Jetty bound to port 40023 2023-07-24 21:11:05,894 INFO [Listener at localhost/36605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:05,895 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,895 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c64a668{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:05,896 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,896 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@d26bd46{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:05,900 INFO [Listener at localhost/36605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:05,901 INFO [Listener at localhost/36605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:05,902 INFO [Listener at localhost/36605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:05,902 INFO [Listener at localhost/36605] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:11:05,902 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,903 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@69b3dc86{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:05,904 INFO [Listener at localhost/36605] server.AbstractConnector(333): Started ServerConnector@29d5e39{HTTP/1.1, (http/1.1)}{0.0.0.0:40023} 2023-07-24 21:11:05,904 INFO [Listener at localhost/36605] server.Server(415): Started @36551ms 2023-07-24 21:11:05,916 INFO [Listener at localhost/36605] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:05,916 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,916 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,916 INFO [Listener at localhost/36605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:11:05,916 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-24 21:11:05,916 INFO [Listener at localhost/36605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:05,916 INFO [Listener at localhost/36605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:05,917 INFO [Listener at localhost/36605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39169 2023-07-24 21:11:05,917 INFO [Listener at localhost/36605] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:11:05,919 DEBUG [Listener at localhost/36605] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:11:05,919 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,920 INFO [Listener at localhost/36605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:05,921 INFO [Listener at localhost/36605] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39169 connecting to ZooKeeper ensemble=127.0.0.1:53256 2023-07-24 21:11:05,925 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:391690x0, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:05,969 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39169-0x101992c540a0003 connected 2023-07-24 21:11:05,969 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:05,971 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:05,973 DEBUG [Listener at localhost/36605] zookeeper.ZKUtil(164): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:05,976 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39169 2023-07-24 21:11:05,976 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39169 2023-07-24 21:11:05,977 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39169 2023-07-24 21:11:05,980 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39169 2023-07-24 21:11:05,981 DEBUG [Listener at localhost/36605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39169 2023-07-24 21:11:05,984 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:05,984 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:05,984 INFO [Listener at localhost/36605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:05,984 INFO [Listener at localhost/36605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:11:05,984 INFO [Listener at localhost/36605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:05,984 INFO [Listener at localhost/36605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:05,985 INFO [Listener at localhost/36605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 21:11:05,985 INFO [Listener at localhost/36605] http.HttpServer(1146): Jetty bound to port 43365 2023-07-24 21:11:05,985 INFO [Listener at localhost/36605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:05,995 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,995 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6defc860{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:05,996 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:05,996 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5de4fb80{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:06,002 INFO [Listener at localhost/36605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:06,003 INFO [Listener at localhost/36605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:06,003 INFO [Listener at localhost/36605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:06,003 INFO [Listener at localhost/36605] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 21:11:06,004 INFO [Listener at localhost/36605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:06,005 INFO [Listener at localhost/36605] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@471104d8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:06,007 INFO [Listener at localhost/36605] server.AbstractConnector(333): Started ServerConnector@692f1be5{HTTP/1.1, (http/1.1)}{0.0.0.0:43365} 2023-07-24 21:11:06,007 INFO [Listener at localhost/36605] server.Server(415): Started @36653ms 2023-07-24 21:11:06,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:06,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@64df5e88{HTTP/1.1, (http/1.1)}{0.0.0.0:37403} 2023-07-24 21:11:06,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @36659ms 2023-07-24 21:11:06,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:06,014 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 21:11:06,015 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:06,016 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:06,016 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:06,016 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:06,016 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:06,017 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:06,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 21:11:06,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36583,1690233065756 from backup master directory 2023-07-24 
21:11:06,020 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 21:11:06,020 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:06,020 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 21:11:06,020 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:11:06,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:06,035 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/hbase.id with ID: dd6ce468-4c3a-415e-89e9-32af92fd3c18 2023-07-24 21:11:06,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:06,051 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:06,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1c3894ca to 127.0.0.1:53256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:06,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e221844, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:06,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:06,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 21:11:06,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:06,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store-tmp 2023-07-24 21:11:06,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:06,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 21:11:06,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:06,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:06,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 21:11:06,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:06,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
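The table descriptor printed in the records above (table 'master:store', single family 'proc' with BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', TTL => 'FOREVER', BLOCKSIZE => '65536') can be expressed with the public HBase 2.x client builders. The sketch below is illustrative only: the class name MasterStoreDescriptorSketch is invented, and this is not the internal MasterRegion bootstrap code that actually creates the region.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Column family 'proc' with the attributes shown in the log record above.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                   // VERSIONS => '1'
        .setInMemory(false)                  // IN_MEMORY => 'false'
        .setTimeToLive(HConstants.FOREVER)   // TTL => 'FOREVER'
        .setBlocksize(65536)                 // BLOCKSIZE => '65536'
        .build();

    // Table 'master:store' (namespace 'master', qualifier 'store').
    TableDescriptor store = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();

    System.out.println(store);
  }
}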
2023-07-24 21:11:06,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:11:06,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/WALs/jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:06,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36583%2C1690233065756, suffix=, logDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/WALs/jenkins-hbase4.apache.org,36583,1690233065756, archiveDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/oldWALs, maxLogs=10 2023-07-24 21:11:06,095 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK] 2023-07-24 21:11:06,096 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK] 2023-07-24 21:11:06,096 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK] 2023-07-24 21:11:06,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/WALs/jenkins-hbase4.apache.org,36583,1690233065756/jenkins-hbase4.apache.org%2C36583%2C1690233065756.1690233066079 2023-07-24 21:11:06,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK], DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK], DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK]] 2023-07-24 21:11:06,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:06,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:06,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:06,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:06,104 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:06,106 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 21:11:06,106 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 21:11:06,107 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:06,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:06,109 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:06,114 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:06,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9888855200, jitterRate=-0.07902859151363373}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:06,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:11:06,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 21:11:06,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 21:11:06,116 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 21:11:06,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 21:11:06,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 21:11:06,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 21:11:06,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 21:11:06,118 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 21:11:06,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 21:11:06,120 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 21:11:06,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 21:11:06,121 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 21:11:06,122 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:06,123 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 21:11:06,123 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 21:11:06,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 21:11:06,125 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:06,125 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:06,125 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 21:11:06,125 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:06,125 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:06,126 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36583,1690233065756, sessionid=0x101992c540a0000, setting cluster-up flag (Was=false) 2023-07-24 21:11:06,132 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:06,137 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 21:11:06,137 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:06,140 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:06,144 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 21:11:06,145 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:06,146 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.hbase-snapshot/.tmp 2023-07-24 21:11:06,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 21:11:06,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 21:11:06,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 21:11:06,148 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 21:11:06,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
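The "System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded" record above is the rsgroup endpoint this test suite exercises. Outside the test harness it is usually enabled through configuration roughly as sketched below; the class name RsGroupConfigSketch is invented, and pairing the endpoint with RSGroupBasedLoadBalancer is the conventional setup rather than something shown in this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup admin endpoint as a master coprocessor, as seen in the log.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // rsgroups are normally paired with the group-aware balancer, which wraps the
    // StochasticLoadBalancer whose configuration is printed further down in the log.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    System.out.println(conf.get("hbase.coprocessor.master.classes"));
  }
}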
2023-07-24 21:11:06,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-24 21:11:06,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 21:11:06,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 21:11:06,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 21:11:06,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 21:11:06,179 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
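The two StochasticLoadBalancer records above list its cost functions and print the "sum of multiplier of cost functions". Conceptually the balancer scores a candidate assignment as a multiplier-weighted combination of per-function costs; the snippet below is a simplified stand-alone illustration of that weighting with invented numbers, not the balancer's actual implementation.

public class WeightedCostSketch {
  // Combine per-function costs (each assumed scaled to [0,1]) using their
  // multipliers, the shape of computation the balancer log line refers to.
  static double weightedCost(double[] costs, double[] multipliers) {
    double weighted = 0.0, sumOfMultipliers = 0.0;
    for (int i = 0; i < costs.length; i++) {
      weighted += costs[i] * multipliers[i];
      sumOfMultipliers += multipliers[i];
    }
    // Guard against a zero multiplier sum (as reported in this log) so the
    // combined cost degrades to zero instead of dividing by zero.
    return sumOfMultipliers == 0.0 ? 0.0 : weighted / sumOfMultipliers;
  }

  public static void main(String[] args) {
    double[] costs = {0.2, 0.5, 0.1};         // hypothetical scaled costs
    double[] multipliers = {500.0, 7.0, 5.0}; // hypothetical weights
    System.out.println(weightedCost(costs, multipliers));
  }
}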
2023-07-24 21:11:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:11:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:11:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:11:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:11:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 21:11:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690233096194 2023-07-24 21:11:06,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 21:11:06,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 21:11:06,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 21:11:06,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 21:11:06,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 21:11:06,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 21:11:06,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-24 21:11:06,195 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 21:11:06,195 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 21:11:06,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 21:11:06,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 21:11:06,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 21:11:06,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 21:11:06,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 21:11:06,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233066206,5,FailOnTimeoutGroup] 2023-07-24 21:11:06,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233066206,5,FailOnTimeoutGroup] 2023-07-24 21:11:06,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 21:11:06,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
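The cleaner records above are fixed-period background chores (LogsCleaner and HFileCleaner every 600000 ms, ReplicationBarrierCleaner every 43200000 ms, SnapshotCleaner every 1800000 ms). As an analogy only, and not HBase's ChoreService API, the same scheduling pattern looks like this in plain java.util.concurrent:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ChoreSchedulingSketch {
  public static void main(String[] args) {
    ScheduledExecutorService chores = Executors.newScheduledThreadPool(2);
    // Stand-in for the LogsCleaner chore: runs every 600000 ms (10 minutes).
    chores.scheduleAtFixedRate(
        () -> System.out.println("cleaning old WALs..."),
        0, 600_000, TimeUnit.MILLISECONDS);
    // Stand-in for the HFileCleaner chore, same 600000 ms period.
    chores.scheduleAtFixedRate(
        () -> System.out.println("cleaning archived HFiles..."),
        0, 600_000, TimeUnit.MILLISECONDS);
  }
}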
2023-07-24 21:11:06,208 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(951): ClusterId : dd6ce468-4c3a-415e-89e9-32af92fd3c18 2023-07-24 21:11:06,208 DEBUG [RS:0;jenkins-hbase4:33693] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:11:06,210 DEBUG [RS:0;jenkins-hbase4:33693] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:11:06,210 DEBUG [RS:0;jenkins-hbase4:33693] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:11:06,211 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:06,212 DEBUG [RS:0;jenkins-hbase4:33693] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:11:06,215 DEBUG [RS:0;jenkins-hbase4:33693] zookeeper.ReadOnlyZKClient(139): Connect 0x017355dd to 127.0.0.1:53256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:06,216 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(951): ClusterId : dd6ce468-4c3a-415e-89e9-32af92fd3c18 2023-07-24 21:11:06,216 DEBUG [RS:1;jenkins-hbase4:33655] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:11:06,219 DEBUG [RS:1;jenkins-hbase4:33655] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:11:06,219 DEBUG [RS:1;jenkins-hbase4:33655] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:11:06,220 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(951): ClusterId : dd6ce468-4c3a-415e-89e9-32af92fd3c18 2023-07-24 21:11:06,221 DEBUG [RS:2;jenkins-hbase4:39169] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:11:06,222 DEBUG [RS:1;jenkins-hbase4:33655] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:11:06,222 DEBUG [RS:2;jenkins-hbase4:39169] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:11:06,222 DEBUG [RS:2;jenkins-hbase4:39169] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:11:06,224 DEBUG [RS:2;jenkins-hbase4:39169] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:11:06,233 DEBUG [RS:1;jenkins-hbase4:33655] 
zookeeper.ReadOnlyZKClient(139): Connect 0x108c95bd to 127.0.0.1:53256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:06,233 DEBUG [RS:2;jenkins-hbase4:39169] zookeeper.ReadOnlyZKClient(139): Connect 0x7e40c91c to 127.0.0.1:53256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:06,292 DEBUG [RS:1;jenkins-hbase4:33655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e4b69dc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:06,292 DEBUG [RS:1;jenkins-hbase4:33655] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a74fecd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:06,297 DEBUG [RS:0;jenkins-hbase4:33693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@b8b3ead, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:06,298 DEBUG [RS:0;jenkins-hbase4:33693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e89b1b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:06,303 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33655 2023-07-24 21:11:06,304 INFO [RS:1;jenkins-hbase4:33655] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:11:06,304 INFO [RS:1;jenkins-hbase4:33655] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:11:06,304 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 21:11:06,304 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36583,1690233065756 with isa=jenkins-hbase4.apache.org/172.31.14.131:33655, startcode=1690233065872 2023-07-24 21:11:06,304 DEBUG [RS:1;jenkins-hbase4:33655] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:11:06,310 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33693 2023-07-24 21:11:06,310 INFO [RS:0;jenkins-hbase4:33693] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:11:06,310 INFO [RS:0;jenkins-hbase4:33693] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:11:06,310 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 21:11:06,311 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36583,1690233065756 with isa=jenkins-hbase4.apache.org/172.31.14.131:33693, startcode=1690233065825 2023-07-24 21:11:06,311 DEBUG [RS:0;jenkins-hbase4:33693] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:11:06,321 DEBUG [RS:2;jenkins-hbase4:39169] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@708728a4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:06,322 DEBUG [RS:2;jenkins-hbase4:39169] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@74b8a26c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:06,322 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50273, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:11:06,322 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33767, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:11:06,325 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36583] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,325 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 21:11:06,325 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 21:11:06,333 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36583] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,333 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 21:11:06,333 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 21:11:06,334 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640 2023-07-24 21:11:06,334 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41467 2023-07-24 21:11:06,334 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39001 2023-07-24 21:11:06,335 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640 2023-07-24 21:11:06,335 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41467 2023-07-24 21:11:06,335 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39001 2023-07-24 21:11:06,335 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:06,339 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 21:11:06,342 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:06,342 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:06,343 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640 2023-07-24 21:11:06,344 DEBUG [RS:1;jenkins-hbase4:33655] zookeeper.ZKUtil(162): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Set 
watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,344 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39169 2023-07-24 21:11:06,344 WARN [RS:1;jenkins-hbase4:33655] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:11:06,344 INFO [RS:2;jenkins-hbase4:39169] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:11:06,344 INFO [RS:1;jenkins-hbase4:33655] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:06,344 INFO [RS:2;jenkins-hbase4:39169] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:11:06,344 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 21:11:06,344 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,345 DEBUG [RS:0;jenkins-hbase4:33693] zookeeper.ZKUtil(162): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,345 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36583,1690233065756 with isa=jenkins-hbase4.apache.org/172.31.14.131:39169, startcode=1690233065915 2023-07-24 21:11:06,345 WARN [RS:0;jenkins-hbase4:33693] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:11:06,345 DEBUG [RS:2;jenkins-hbase4:39169] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:11:06,345 INFO [RS:0;jenkins-hbase4:33693] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:06,345 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,347 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33499, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:11:06,347 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36583] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:06,347 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 21:11:06,347 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 21:11:06,347 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640 2023-07-24 21:11:06,348 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41467 2023-07-24 21:11:06,348 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39001 2023-07-24 21:11:06,349 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:06,349 DEBUG [RS:2;jenkins-hbase4:39169] zookeeper.ZKUtil(162): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:06,349 WARN [RS:2;jenkins-hbase4:39169] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:11:06,349 INFO [RS:2;jenkins-hbase4:39169] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:06,349 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:06,350 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33693,1690233065825] 2023-07-24 21:11:06,350 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33655,1690233065872] 2023-07-24 21:11:06,351 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39169,1690233065915] 2023-07-24 21:11:06,382 DEBUG [RS:0;jenkins-hbase4:33693] zookeeper.ZKUtil(162): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:06,382 DEBUG [RS:1;jenkins-hbase4:33655] zookeeper.ZKUtil(162): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:06,382 DEBUG [RS:0;jenkins-hbase4:33693] zookeeper.ZKUtil(162): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,382 DEBUG [RS:2;jenkins-hbase4:39169] zookeeper.ZKUtil(162): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:06,383 DEBUG [RS:1;jenkins-hbase4:33655] zookeeper.ZKUtil(162): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,383 DEBUG [RS:0;jenkins-hbase4:33693] zookeeper.ZKUtil(162): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,383 DEBUG [RS:2;jenkins-hbase4:39169] zookeeper.ZKUtil(162): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,383 DEBUG [RS:1;jenkins-hbase4:33655] zookeeper.ZKUtil(162): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,383 DEBUG [RS:2;jenkins-hbase4:39169] zookeeper.ZKUtil(162): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,384 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:11:06,384 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:11:06,384 INFO [RS:0;jenkins-hbase4:33693] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:11:06,384 INFO [RS:1;jenkins-hbase4:33655] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:11:06,384 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:11:06,393 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:06,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 21:11:06,397 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/info 2023-07-24 21:11:06,398 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 21:11:06,399 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 
21:11:06,399 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 21:11:06,400 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:11:06,401 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 21:11:06,401 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:06,401 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 21:11:06,403 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/table 2023-07-24 21:11:06,404 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 21:11:06,404 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:06,407 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740 2023-07-24 21:11:06,407 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740 2023-07-24 
21:11:06,409 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 21:11:06,412 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 21:11:06,415 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:06,416 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11094139040, jitterRate=0.03322221338748932}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 21:11:06,416 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 21:11:06,416 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 21:11:06,416 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 21:11:06,416 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 21:11:06,416 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 21:11:06,416 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 21:11:06,417 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 21:11:06,417 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 21:11:06,418 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 21:11:06,418 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 21:11:06,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 21:11:06,420 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 21:11:06,422 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 21:11:06,440 INFO [RS:2;jenkins-hbase4:39169] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:11:06,441 INFO [RS:0;jenkins-hbase4:33693] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:11:06,441 INFO [RS:1;jenkins-hbase4:33655] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:11:06,442 INFO [RS:0;jenkins-hbase4:33693] 
throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:11:06,442 INFO [RS:1;jenkins-hbase4:33655] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:11:06,442 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,442 INFO [RS:2;jenkins-hbase4:39169] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:11:06,442 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,442 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:11:06,443 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:11:06,443 INFO [RS:2;jenkins-hbase4:39169] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:11:06,443 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,444 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:11:06,445 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,445 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,445 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
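The two region-open records earlier ("Opened 1595e783b53d99cd5eef43b6debb2682 ... desiredMaxFileSize=9888855200, jitterRate=-0.079..." and "Opened 1588230740 ... desiredMaxFileSize=11094139040, jitterRate=0.033...") are consistent with the split policy applying its jitter to a 10 GiB base region size (10737418240 bytes, the usual hbase.hregion.max.filesize default, assumed here rather than read from this log). A quick arithmetic check:

public class SplitSizeJitterSketch {
  public static void main(String[] args) {
    long baseMaxFileSize = 10_737_418_240L; // assumed hbase.hregion.max.filesize default, 10 GiB
    double jitterMasterStore = -0.07902859151363373; // from the master:store open record
    double jitterMeta = 0.03322221338748932;         // from the hbase:meta open record

    // desiredMaxFileSize = base * (1 + jitterRate)
    System.out.println((long) (baseMaxFileSize * (1 + jitterMasterStore))); // ~9888855200
    System.out.println((long) (baseMaxFileSize * (1 + jitterMeta)));        // ~11094139040
  }
}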
2023-07-24 21:11:06,446 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:06,446 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:06,446 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,446 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:2;jenkins-hbase4:39169] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:0;jenkins-hbase4:33693] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:06,447 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,447 DEBUG [RS:1;jenkins-hbase4:33655] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:06,459 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,460 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,460 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,460 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,461 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,462 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
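Each "Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled." line above corresponds to a periodic task registered with a ChoreService on the region server. A minimal sketch of that pattern, assuming ScheduledChore's (name, stopper, period) constructor and ChoreService.scheduleChore behave as in branch-2.4:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreExample {
  public static void main(String[] args) {
    ChoreService service = new ChoreService("example");
    // A Stoppable lets the chore framework stop the task when its owner shuts down.
    Stoppable stopper = new Stoppable() {
      private final AtomicBoolean stopped = new AtomicBoolean(false);
      @Override public void stop(String why) { stopped.set(true); }
      @Override public boolean isStopped() { return stopped.get(); }
    };
    // Runs chore() once per second, analogous to CompactionChecker/MemstoreFlusherChore above.
    ScheduledChore tick = new ScheduledChore("exampleChore", stopper, 1000) {
      @Override protected void chore() {
        System.out.println("chore tick");
      }
    };
    service.scheduleChore(tick);
    // ... later: service.shutdown();
  }
}
```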
2023-07-24 21:11:06,462 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,462 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,462 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,462 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,462 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,462 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,475 INFO [RS:2;jenkins-hbase4:39169] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:11:06,475 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39169,1690233065915-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,475 INFO [RS:0;jenkins-hbase4:33693] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:11:06,475 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33693,1690233065825-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,478 INFO [RS:1;jenkins-hbase4:33655] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:11:06,479 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33655,1690233065872-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
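The globalMemStoreLimit/low-mark figures and the HeapMemoryManager "tuneOn=false" messages above reflect heap sizing configuration rather than anything test-specific. A hedged sketch of the two properties most likely involved (key names from memory, values approximate; 743.3 M is roughly 95% of 782.4 M, matching the lower-limit fraction shown):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreSizing {
  public static Configuration conf() {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the region-server heap shared by all memstores (the "globalMemStoreLimit" above).
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // Low-water mark as a fraction of that limit (the "globalMemStoreLimitLowMark" above).
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    return conf;
  }
}
```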
2023-07-24 21:11:06,487 INFO [RS:2;jenkins-hbase4:39169] regionserver.Replication(203): jenkins-hbase4.apache.org,39169,1690233065915 started 2023-07-24 21:11:06,487 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39169,1690233065915, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39169, sessionid=0x101992c540a0003 2023-07-24 21:11:06,487 INFO [RS:0;jenkins-hbase4:33693] regionserver.Replication(203): jenkins-hbase4.apache.org,33693,1690233065825 started 2023-07-24 21:11:06,487 DEBUG [RS:2;jenkins-hbase4:39169] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:11:06,487 DEBUG [RS:2;jenkins-hbase4:39169] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:06,487 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33693,1690233065825, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33693, sessionid=0x101992c540a0001 2023-07-24 21:11:06,487 DEBUG [RS:2;jenkins-hbase4:39169] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39169,1690233065915' 2023-07-24 21:11:06,487 DEBUG [RS:2;jenkins-hbase4:39169] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:11:06,487 DEBUG [RS:0;jenkins-hbase4:33693] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:11:06,487 DEBUG [RS:0;jenkins-hbase4:33693] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,487 DEBUG [RS:0;jenkins-hbase4:33693] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33693,1690233065825' 2023-07-24 21:11:06,487 DEBUG [RS:0;jenkins-hbase4:33693] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:11:06,488 DEBUG [RS:2;jenkins-hbase4:39169] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:11:06,488 DEBUG [RS:0;jenkins-hbase4:33693] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:11:06,488 DEBUG [RS:2;jenkins-hbase4:39169] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:11:06,488 DEBUG [RS:2;jenkins-hbase4:39169] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:11:06,488 DEBUG [RS:2;jenkins-hbase4:39169] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:06,488 DEBUG [RS:2;jenkins-hbase4:39169] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39169,1690233065915' 2023-07-24 21:11:06,488 DEBUG [RS:2;jenkins-hbase4:39169] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:11:06,488 DEBUG [RS:0;jenkins-hbase4:33693] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:11:06,488 DEBUG [RS:0;jenkins-hbase4:33693] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:11:06,488 
DEBUG [RS:0;jenkins-hbase4:33693] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,488 DEBUG [RS:0;jenkins-hbase4:33693] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33693,1690233065825' 2023-07-24 21:11:06,488 DEBUG [RS:0;jenkins-hbase4:33693] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:11:06,489 DEBUG [RS:2;jenkins-hbase4:39169] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:11:06,489 DEBUG [RS:2;jenkins-hbase4:39169] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:11:06,489 DEBUG [RS:0;jenkins-hbase4:33693] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:11:06,489 INFO [RS:2;jenkins-hbase4:39169] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 21:11:06,490 DEBUG [RS:0;jenkins-hbase4:33693] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:11:06,490 INFO [RS:0;jenkins-hbase4:33693] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 21:11:06,491 INFO [RS:1;jenkins-hbase4:33655] regionserver.Replication(203): jenkins-hbase4.apache.org,33655,1690233065872 started 2023-07-24 21:11:06,491 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33655,1690233065872, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33655, sessionid=0x101992c540a0002 2023-07-24 21:11:06,491 DEBUG [RS:1;jenkins-hbase4:33655] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:11:06,491 DEBUG [RS:1;jenkins-hbase4:33655] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,492 DEBUG [RS:1;jenkins-hbase4:33655] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33655,1690233065872' 2023-07-24 21:11:06,492 DEBUG [RS:1;jenkins-hbase4:33655] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:11:06,492 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,492 DEBUG [RS:1;jenkins-hbase4:33655] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:11:06,492 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
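The flush-table-proc and online-snapshot managers above coordinate through plain ZooKeeper znodes (/hbase/flush-table-proc/{acquired,abort} and /hbase/online-snapshot/{acquired,abort}, as printed in the log). For illustration only, those znodes can be inspected with the stock ZooKeeper client; the quorum string below is this mini-cluster's and has no meaning outside this run:

```java
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ProcedureZNodes {
  public static void main(String[] args) throws Exception {
    // Connect to the quorum the region servers above registered with.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:53256", 30_000, event -> { });
    for (String path : new String[] {
        "/hbase/flush-table-proc/acquired", "/hbase/flush-table-proc/abort",
        "/hbase/online-snapshot/acquired", "/hbase/online-snapshot/abort" }) {
      List<String> children = zk.getChildren(path, false); // procedure members appear here
      System.out.println(path + " -> " + children);
    }
    zk.close();
  }
}
```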
2023-07-24 21:11:06,492 DEBUG [RS:1;jenkins-hbase4:33655] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:11:06,492 DEBUG [RS:1;jenkins-hbase4:33655] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:11:06,492 DEBUG [RS:0;jenkins-hbase4:33693] zookeeper.ZKUtil(398): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 21:11:06,492 DEBUG [RS:2;jenkins-hbase4:39169] zookeeper.ZKUtil(398): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 21:11:06,493 INFO [RS:0;jenkins-hbase4:33693] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 21:11:06,493 DEBUG [RS:1;jenkins-hbase4:33655] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,493 DEBUG [RS:1;jenkins-hbase4:33655] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33655,1690233065872' 2023-07-24 21:11:06,493 DEBUG [RS:1;jenkins-hbase4:33655] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:11:06,493 INFO [RS:2;jenkins-hbase4:39169] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 21:11:06,493 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,493 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,493 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,493 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,495 DEBUG [RS:1;jenkins-hbase4:33655] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:11:06,495 DEBUG [RS:1;jenkins-hbase4:33655] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:11:06,495 INFO [RS:1;jenkins-hbase4:33655] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 21:11:06,495 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,495 DEBUG [RS:1;jenkins-hbase4:33655] zookeeper.ZKUtil(398): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 21:11:06,495 INFO [RS:1;jenkins-hbase4:33655] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 21:11:06,495 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
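The RegionServerRpcQuotaManager and the Quota/SpaceQuota refresher chores started above enforce quotas defined through the Admin API. A hedged sketch of defining one (the table name "t1" is hypothetical, and QuotaSettingsFactory.throttleTable is assumed to have the 2.x signature shown):

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettings;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class RpcQuotaExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Limit a hypothetical table to 1000 requests per second; the quota manager started
      // above is what picks this up (via QuotaRefresherChore) and enforces it per server.
      QuotaSettings throttle = QuotaSettingsFactory.throttleTable(
          TableName.valueOf("t1"), ThrottleType.REQUEST_NUMBER, 1000, TimeUnit.SECONDS);
      admin.setQuota(throttle);
    }
  }
}
```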
2023-07-24 21:11:06,495 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,573 DEBUG [jenkins-hbase4:36583] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 21:11:06,573 DEBUG [jenkins-hbase4:36583] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:06,573 DEBUG [jenkins-hbase4:36583] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:06,573 DEBUG [jenkins-hbase4:36583] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:06,573 DEBUG [jenkins-hbase4:36583] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:06,573 DEBUG [jenkins-hbase4:36583] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:06,574 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33693,1690233065825, state=OPENING 2023-07-24 21:11:06,575 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 21:11:06,577 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:06,579 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33693,1690233065825}] 2023-07-24 21:11:06,579 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:11:06,597 INFO [RS:0;jenkins-hbase4:33693] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33693%2C1690233065825, suffix=, logDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,33693,1690233065825, archiveDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/oldWALs, maxLogs=32 2023-07-24 21:11:06,597 INFO [RS:2;jenkins-hbase4:39169] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39169%2C1690233065915, suffix=, logDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,39169,1690233065915, archiveDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/oldWALs, maxLogs=32 2023-07-24 21:11:06,597 INFO [RS:1;jenkins-hbase4:33655] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33655%2C1690233065872, suffix=, logDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,33655,1690233065872, archiveDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/oldWALs, maxLogs=32 2023-07-24 21:11:06,630 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK] 2023-07-24 21:11:06,631 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK] 2023-07-24 21:11:06,631 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK] 2023-07-24 21:11:06,631 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK] 2023-07-24 21:11:06,632 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK] 2023-07-24 21:11:06,632 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK] 2023-07-24 21:11:06,632 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK] 2023-07-24 21:11:06,632 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK] 2023-07-24 21:11:06,632 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK] 2023-07-24 21:11:06,643 INFO [RS:0;jenkins-hbase4:33693] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,33693,1690233065825/jenkins-hbase4.apache.org%2C33693%2C1690233065825.1690233066601 2023-07-24 21:11:06,644 INFO [RS:1;jenkins-hbase4:33655] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,33655,1690233065872/jenkins-hbase4.apache.org%2C33655%2C1690233065872.1690233066607 2023-07-24 21:11:06,644 INFO [RS:2;jenkins-hbase4:39169] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,39169,1690233065915/jenkins-hbase4.apache.org%2C39169%2C1690233065915.1690233066607 2023-07-24 21:11:06,646 DEBUG [RS:0;jenkins-hbase4:33693] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK], 
DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK], DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK]] 2023-07-24 21:11:06,647 DEBUG [RS:1;jenkins-hbase4:33655] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK], DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK], DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK]] 2023-07-24 21:11:06,647 DEBUG [RS:2;jenkins-hbase4:39169] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK], DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK], DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK]] 2023-07-24 21:11:06,733 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:06,734 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:11:06,735 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38940, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:11:06,740 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 21:11:06,740 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:06,742 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33693%2C1690233065825.meta, suffix=.meta, logDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,33693,1690233065825, archiveDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/oldWALs, maxLogs=32 2023-07-24 21:11:06,759 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK] 2023-07-24 21:11:06,759 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK] 2023-07-24 21:11:06,759 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK] 2023-07-24 21:11:06,762 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/WALs/jenkins-hbase4.apache.org,33693,1690233065825/jenkins-hbase4.apache.org%2C33693%2C1690233065825.meta.1690233066743.meta 2023-07-24 21:11:06,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46285,DS-eb8b09c5-1d95-43f5-b9c5-79d021a90669,DISK], DatanodeInfoWithStorage[127.0.0.1:42189,DS-391a8eb3-426f-4457-b45d-02e8ed8f6152,DISK], DatanodeInfoWithStorage[127.0.0.1:37445,DS-d0850649-88f3-4b9a-bfcf-dede6cb1a898,DISK]] 2023-07-24 21:11:06,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:06,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 21:11:06,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 21:11:06,763 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 21:11:06,763 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 21:11:06,763 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:06,763 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 21:11:06,763 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 21:11:06,764 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 21:11:06,765 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/info 2023-07-24 21:11:06,765 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/info 2023-07-24 21:11:06,765 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 21:11:06,766 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:06,766 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 21:11:06,767 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:11:06,767 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:11:06,767 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 21:11:06,768 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:06,768 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 21:11:06,769 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/table 2023-07-24 21:11:06,770 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/table 2023-07-24 21:11:06,770 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 21:11:06,770 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:06,771 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740 2023-07-24 21:11:06,772 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740 2023-07-24 21:11:06,774 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 21:11:06,776 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 21:11:06,776 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10578685440, jitterRate=-0.014783143997192383}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 21:11:06,776 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 21:11:06,777 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690233066733 2023-07-24 21:11:06,782 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 21:11:06,782 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 21:11:06,783 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33693,1690233065825, state=OPEN 2023-07-24 21:11:06,784 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 21:11:06,784 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:11:06,788 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 21:11:06,788 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33693,1690233065825 in 207 msec 2023-07-24 21:11:06,794 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:06,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 21:11:06,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, 
ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 370 msec 2023-07-24 21:11:06,797 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 647 msec 2023-07-24 21:11:06,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690233066797, completionTime=-1 2023-07-24 21:11:06,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 21:11:06,797 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38954, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:06,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-24 21:11:06,799 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:06,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 21:11:06,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690233126800 2023-07-24 21:11:06,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690233186800 2023-07-24 21:11:06,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 2 msec 2023-07-24 21:11:06,801 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 21:11:06,802 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 21:11:06,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36583,1690233065756-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36583,1690233065756-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 21:11:06,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36583,1690233065756-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36583, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:06,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-24 21:11:06,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:06,807 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 21:11:06,807 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:06,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 21:11:06,809 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:11:06,809 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:06,810 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:06,811 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:11:06,811 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d empty. 
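The two CreateTableProcedure executions above print their full table descriptors. For reference, the attribute set logged for 'hbase:namespace' maps onto the 2.x descriptor builders roughly as below; the table name "ns_like" is hypothetical and only illustrates the shape, since system tables are created by the master itself:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorExample {
  // Mirrors the attributes printed for 'hbase:namespace' above, applied to a
  // hypothetical user table purely for illustration.
  public static TableDescriptor namespaceLikeDescriptor() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("ns_like"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(true)                   // IN_MEMORY => 'true'
            .setMaxVersions(10)                  // VERSIONS => '10'
            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
            .build())
        .build();
  }
}
```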
2023-07-24 21:11:06,811 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:06,811 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 21:11:06,812 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:06,812 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2 empty. 2023-07-24 21:11:06,813 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:06,813 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 21:11:06,832 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:06,832 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:06,833 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 635e2023d15573ead56e61286e0aa7a2, NAME => 'hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp 2023-07-24 21:11:06,833 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f3e8d4d6c573151fa50ba9c27c60ef3d, NAME => 'hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp 2023-07-24 21:11:06,847 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:06,847 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] 
regionserver.HRegion(1604): Closing f3e8d4d6c573151fa50ba9c27c60ef3d, disabling compactions & flushes 2023-07-24 21:11:06,847 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:06,847 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:06,847 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. after waiting 0 ms 2023-07-24 21:11:06,847 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:06,847 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:06,847 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f3e8d4d6c573151fa50ba9c27c60ef3d: 2023-07-24 21:11:06,855 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:06,855 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 635e2023d15573ead56e61286e0aa7a2, disabling compactions & flushes 2023-07-24 21:11:06,855 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:06,855 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:06,855 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. after waiting 0 ms 2023-07-24 21:11:06,856 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:06,856 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 
2023-07-24 21:11:06,856 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 635e2023d15573ead56e61286e0aa7a2: 2023-07-24 21:11:06,856 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:11:06,857 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233066857"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233066857"}]},"ts":"1690233066857"} 2023-07-24 21:11:06,859 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:11:06,860 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 21:11:06,860 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233066860"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233066860"}]},"ts":"1690233066860"} 2023-07-24 21:11:06,860 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:11:06,860 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233066860"}]},"ts":"1690233066860"} 2023-07-24 21:11:06,861 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
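The Put entries logged by MetaTableAccessor and RegionStateStore above land in the 'hbase:meta' table under the info family (regioninfo, state, sn qualifiers). A small sketch of reading that state back with a client-side scan, assuming a reachable cluster configuration on the classpath:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaStateScan {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Pull only the info:state column that the assignment manager writes above.
      Scan scan = new Scan().addColumn(Bytes.toBytes("info"), Bytes.toBytes("state"));
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()) + " state="
              + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
        }
      }
    }
  }
}
```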
2023-07-24 21:11:06,862 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:11:06,862 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 21:11:06,862 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233066862"}]},"ts":"1690233066862"} 2023-07-24 21:11:06,864 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 21:11:06,865 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:06,866 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:06,866 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:06,866 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:06,866 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:06,866 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f3e8d4d6c573151fa50ba9c27c60ef3d, ASSIGN}] 2023-07-24 21:11:06,868 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f3e8d4d6c573151fa50ba9c27c60ef3d, ASSIGN 2023-07-24 21:11:06,868 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:06,868 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:06,868 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:06,869 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:06,869 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:06,869 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=635e2023d15573ead56e61286e0aa7a2, ASSIGN}] 2023-07-24 21:11:06,869 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f3e8d4d6c573151fa50ba9c27c60ef3d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33655,1690233065872; forceNewPlan=false, retain=false 2023-07-24 21:11:06,870 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=635e2023d15573ead56e61286e0aa7a2, ASSIGN 2023-07-24 21:11:06,870 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=635e2023d15573ead56e61286e0aa7a2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33655,1690233065872; forceNewPlan=false, retain=false 2023-07-24 21:11:06,871 INFO [jenkins-hbase4:36583] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-24 21:11:06,873 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=f3e8d4d6c573151fa50ba9c27c60ef3d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,873 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233066873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233066873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233066873"}]},"ts":"1690233066873"} 2023-07-24 21:11:06,873 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=635e2023d15573ead56e61286e0aa7a2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:06,873 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233066873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233066873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233066873"}]},"ts":"1690233066873"} 2023-07-24 21:11:06,874 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure f3e8d4d6c573151fa50ba9c27c60ef3d, server=jenkins-hbase4.apache.org,33655,1690233065872}] 2023-07-24 21:11:06,876 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 635e2023d15573ead56e61286e0aa7a2, server=jenkins-hbase4.apache.org,33655,1690233065872}] 2023-07-24 21:11:07,027 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:07,027 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:11:07,029 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42706, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:11:07,035 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 
2023-07-24 21:11:07,035 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 635e2023d15573ead56e61286e0aa7a2, NAME => 'hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:07,036 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:07,036 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:07,036 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:07,036 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:07,037 INFO [StoreOpener-635e2023d15573ead56e61286e0aa7a2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:07,039 DEBUG [StoreOpener-635e2023d15573ead56e61286e0aa7a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2/info 2023-07-24 21:11:07,039 DEBUG [StoreOpener-635e2023d15573ead56e61286e0aa7a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2/info 2023-07-24 21:11:07,039 INFO [StoreOpener-635e2023d15573ead56e61286e0aa7a2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 635e2023d15573ead56e61286e0aa7a2 columnFamilyName info 2023-07-24 21:11:07,040 INFO [StoreOpener-635e2023d15573ead56e61286e0aa7a2-1] regionserver.HStore(310): Store=635e2023d15573ead56e61286e0aa7a2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:07,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:07,041 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:07,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:07,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:07,046 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 635e2023d15573ead56e61286e0aa7a2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9773715680, jitterRate=-0.08975179493427277}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:07,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 635e2023d15573ead56e61286e0aa7a2: 2023-07-24 21:11:07,047 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2., pid=9, masterSystemTime=1690233067027 2023-07-24 21:11:07,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:07,051 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:07,051 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 
2023-07-24 21:11:07,051 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f3e8d4d6c573151fa50ba9c27c60ef3d, NAME => 'hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:07,051 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=635e2023d15573ead56e61286e0aa7a2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:07,051 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 21:11:07,051 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233067051"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233067051"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233067051"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233067051"}]},"ts":"1690233067051"} 2023-07-24 21:11:07,051 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. service=MultiRowMutationService 2023-07-24 21:11:07,051 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 21:11:07,051 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:07,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:07,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:07,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:07,054 INFO [StoreOpener-f3e8d4d6c573151fa50ba9c27c60ef3d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:07,055 DEBUG [StoreOpener-f3e8d4d6c573151fa50ba9c27c60ef3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d/m 2023-07-24 21:11:07,056 DEBUG [StoreOpener-f3e8d4d6c573151fa50ba9c27c60ef3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d/m 2023-07-24 21:11:07,056 INFO [StoreOpener-f3e8d4d6c573151fa50ba9c27c60ef3d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f3e8d4d6c573151fa50ba9c27c60ef3d columnFamilyName m 2023-07-24 21:11:07,057 INFO [StoreOpener-f3e8d4d6c573151fa50ba9c27c60ef3d-1] regionserver.HStore(310): Store=f3e8d4d6c573151fa50ba9c27c60ef3d/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:07,057 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:07,058 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:07,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 21:11:07,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 635e2023d15573ead56e61286e0aa7a2, server=jenkins-hbase4.apache.org,33655,1690233065872 in 178 msec 2023-07-24 21:11:07,062 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:07,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 21:11:07,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=635e2023d15573ead56e61286e0aa7a2, ASSIGN in 191 msec 2023-07-24 21:11:07,063 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:11:07,063 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233067063"}]},"ts":"1690233067063"} 2023-07-24 21:11:07,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:07,065 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened f3e8d4d6c573151fa50ba9c27c60ef3d; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@446eeef6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:07,065 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f3e8d4d6c573151fa50ba9c27c60ef3d: 2023-07-24 21:11:07,067 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d., pid=8, masterSystemTime=1690233067027 2023-07-24 21:11:07,067 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 21:11:07,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:07,068 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:07,068 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=f3e8d4d6c573151fa50ba9c27c60ef3d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:07,069 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233067068"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233067068"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233067068"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233067068"}]},"ts":"1690233067068"} 2023-07-24 21:11:07,069 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:11:07,071 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 263 msec 2023-07-24 21:11:07,072 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-24 21:11:07,072 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure f3e8d4d6c573151fa50ba9c27c60ef3d, server=jenkins-hbase4.apache.org,33655,1690233065872 in 196 msec 2023-07-24 21:11:07,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-24 21:11:07,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f3e8d4d6c573151fa50ba9c27c60ef3d, ASSIGN in 206 msec 2023-07-24 21:11:07,074 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:11:07,074 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233067074"}]},"ts":"1690233067074"} 2023-07-24 21:11:07,076 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 21:11:07,081 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:11:07,082 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 281 msec 2023-07-24 21:11:07,104 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:07,105 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42716, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:07,109 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 21:11:07,109 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 21:11:07,109 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-24 21:11:07,110 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:07,110 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:07,115 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:07,115 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:07,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 21:11:07,118 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 21:11:07,119 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36583,1690233065756] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 21:11:07,130 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:07,133 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-07-24 21:11:07,138 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 21:11:07,147 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:07,150 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-07-24 21:11:07,162 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 21:11:07,166 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 21:11:07,166 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.146sec 2023-07-24 21:11:07,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-24 21:11:07,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:07,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-24 21:11:07,170 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-24 21:11:07,173 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:07,174 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-24 21:11:07,175 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:11:07,176 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,177 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c empty. 2023-07-24 21:11:07,178 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,178 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-24 21:11:07,180 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-24 21:11:07,180 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-24 21:11:07,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:07,183 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:07,183 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 21:11:07,183 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 21:11:07,183 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36583,1690233065756-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 21:11:07,183 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36583,1690233065756-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-24 21:11:07,191 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 21:11:07,207 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:07,210 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 158b9f54e8204dcebcb1991d735b0d7c, NAME => 'hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp 2023-07-24 21:11:07,222 DEBUG [Listener at localhost/36605] zookeeper.ReadOnlyZKClient(139): Connect 0x7f0cff46 to 127.0.0.1:53256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:07,247 DEBUG [Listener at localhost/36605] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c6a6d28, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:07,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:07,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 158b9f54e8204dcebcb1991d735b0d7c, disabling compactions & flushes 2023-07-24 21:11:07,256 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:07,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:07,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. after waiting 0 ms 2023-07-24 21:11:07,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:07,256 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 
2023-07-24 21:11:07,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 158b9f54e8204dcebcb1991d735b0d7c: 2023-07-24 21:11:07,257 DEBUG [hconnection-0x27cec564-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:07,259 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:11:07,260 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690233067260"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233067260"}]},"ts":"1690233067260"} 2023-07-24 21:11:07,260 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38956, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:07,262 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 21:11:07,262 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:07,262 INFO [Listener at localhost/36605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:07,263 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:11:07,266 DEBUG [Listener at localhost/36605] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 21:11:07,266 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233067266"}]},"ts":"1690233067266"} 2023-07-24 21:11:07,267 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-24 21:11:07,268 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59632, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 21:11:07,273 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:07,273 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:07,273 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:07,273 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:07,273 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:07,273 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=158b9f54e8204dcebcb1991d735b0d7c, ASSIGN}] 2023-07-24 21:11:07,273 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/balancer 2023-07-24 21:11:07,273 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:07,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 21:11:07,274 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=158b9f54e8204dcebcb1991d735b0d7c, ASSIGN 2023-07-24 21:11:07,275 DEBUG [Listener at localhost/36605] zookeeper.ReadOnlyZKClient(139): Connect 0x7fd43610 to 127.0.0.1:53256 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:07,278 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=158b9f54e8204dcebcb1991d735b0d7c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39169,1690233065915; forceNewPlan=false, retain=false 2023-07-24 21:11:07,282 DEBUG [Listener at localhost/36605] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ee0396b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:07,282 INFO [Listener at localhost/36605] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:53256 2023-07-24 21:11:07,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-24 21:11:07,296 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:07,298 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101992c540a000a connected 2023-07-24 21:11:07,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-24 21:11:07,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 21:11:07,313 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:07,315 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 18 msec 2023-07-24 21:11:07,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 21:11:07,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] 
master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:07,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-24 21:11:07,417 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:07,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-24 21:11:07,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 21:11:07,420 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:07,420 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 21:11:07,422 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:11:07,423 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,424 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/6e255690c5151703c1dc2584df251c1e empty. 2023-07-24 21:11:07,424 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,424 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 21:11:07,430 INFO [jenkins-hbase4:36583] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 21:11:07,431 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=158b9f54e8204dcebcb1991d735b0d7c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:07,431 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690233067431"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233067431"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233067431"}]},"ts":"1690233067431"} 2023-07-24 21:11:07,434 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 158b9f54e8204dcebcb1991d735b0d7c, server=jenkins-hbase4.apache.org,39169,1690233065915}] 2023-07-24 21:11:07,450 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:07,451 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6e255690c5151703c1dc2584df251c1e, NAME => 'np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp 2023-07-24 21:11:07,466 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:07,466 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 6e255690c5151703c1dc2584df251c1e, disabling compactions & flushes 2023-07-24 21:11:07,466 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 2023-07-24 21:11:07,466 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 2023-07-24 21:11:07,466 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. after waiting 0 ms 2023-07-24 21:11:07,466 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 2023-07-24 21:11:07,466 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 
2023-07-24 21:11:07,466 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 6e255690c5151703c1dc2584df251c1e: 2023-07-24 21:11:07,470 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:11:07,471 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233067471"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233067471"}]},"ts":"1690233067471"} 2023-07-24 21:11:07,473 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 21:11:07,473 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:11:07,474 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233067473"}]},"ts":"1690233067473"} 2023-07-24 21:11:07,475 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-24 21:11:07,478 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:07,478 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:07,478 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:07,478 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:07,478 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:07,478 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=6e255690c5151703c1dc2584df251c1e, ASSIGN}] 2023-07-24 21:11:07,479 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=6e255690c5151703c1dc2584df251c1e, ASSIGN 2023-07-24 21:11:07,480 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=6e255690c5151703c1dc2584df251c1e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33693,1690233065825; forceNewPlan=false, retain=false 2023-07-24 21:11:07,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 21:11:07,587 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:07,587 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:11:07,588 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39326, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:11:07,592 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:07,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 158b9f54e8204dcebcb1991d735b0d7c, NAME => 'hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:07,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:07,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,593 INFO [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,595 DEBUG [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c/q 2023-07-24 21:11:07,595 DEBUG [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c/q 2023-07-24 21:11:07,595 INFO [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 158b9f54e8204dcebcb1991d735b0d7c columnFamilyName q 2023-07-24 21:11:07,596 INFO [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] regionserver.HStore(310): Store=158b9f54e8204dcebcb1991d735b0d7c/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:07,596 INFO [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,597 DEBUG [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c/u 2023-07-24 21:11:07,597 DEBUG [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c/u 2023-07-24 21:11:07,597 INFO [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 158b9f54e8204dcebcb1991d735b0d7c columnFamilyName u 2023-07-24 21:11:07,598 INFO [StoreOpener-158b9f54e8204dcebcb1991d735b0d7c-1] regionserver.HStore(310): Store=158b9f54e8204dcebcb1991d735b0d7c/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:07,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,601 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-24 21:11:07,601 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:07,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:07,605 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 158b9f54e8204dcebcb1991d735b0d7c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10906712480, jitterRate=0.015766754746437073}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 21:11:07,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 158b9f54e8204dcebcb1991d735b0d7c: 2023-07-24 21:11:07,605 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c., pid=16, masterSystemTime=1690233067587 2023-07-24 21:11:07,608 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:07,609 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:07,609 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=158b9f54e8204dcebcb1991d735b0d7c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:07,609 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690233067609"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233067609"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233067609"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233067609"}]},"ts":"1690233067609"} 2023-07-24 21:11:07,612 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-24 21:11:07,612 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 158b9f54e8204dcebcb1991d735b0d7c, server=jenkins-hbase4.apache.org,39169,1690233065915 in 177 msec 2023-07-24 21:11:07,613 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 21:11:07,613 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=158b9f54e8204dcebcb1991d735b0d7c, ASSIGN in 339 msec 2023-07-24 21:11:07,614 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:11:07,614 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233067614"}]},"ts":"1690233067614"} 2023-07-24 21:11:07,615 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-24 21:11:07,618 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:11:07,619 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 449 msec 2023-07-24 21:11:07,630 INFO [jenkins-hbase4:36583] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 21:11:07,631 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6e255690c5151703c1dc2584df251c1e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:07,631 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233067631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233067631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233067631"}]},"ts":"1690233067631"} 2023-07-24 21:11:07,632 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 6e255690c5151703c1dc2584df251c1e, server=jenkins-hbase4.apache.org,33693,1690233065825}] 2023-07-24 21:11:07,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 21:11:07,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 
2023-07-24 21:11:07,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6e255690c5151703c1dc2584df251c1e, NAME => 'np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:07,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:07,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,789 INFO [StoreOpener-6e255690c5151703c1dc2584df251c1e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,791 DEBUG [StoreOpener-6e255690c5151703c1dc2584df251c1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/np1/table1/6e255690c5151703c1dc2584df251c1e/fam1 2023-07-24 21:11:07,791 DEBUG [StoreOpener-6e255690c5151703c1dc2584df251c1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/np1/table1/6e255690c5151703c1dc2584df251c1e/fam1 2023-07-24 21:11:07,791 INFO [StoreOpener-6e255690c5151703c1dc2584df251c1e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6e255690c5151703c1dc2584df251c1e columnFamilyName fam1 2023-07-24 21:11:07,792 INFO [StoreOpener-6e255690c5151703c1dc2584df251c1e-1] regionserver.HStore(310): Store=6e255690c5151703c1dc2584df251c1e/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:07,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/np1/table1/6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/np1/table1/6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,796 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:07,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/np1/table1/6e255690c5151703c1dc2584df251c1e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:07,798 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6e255690c5151703c1dc2584df251c1e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9598396480, jitterRate=-0.10607966780662537}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:07,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6e255690c5151703c1dc2584df251c1e: 2023-07-24 21:11:07,799 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e., pid=18, masterSystemTime=1690233067783 2023-07-24 21:11:07,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 2023-07-24 21:11:07,801 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 2023-07-24 21:11:07,801 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6e255690c5151703c1dc2584df251c1e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:07,801 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233067801"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233067801"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233067801"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233067801"}]},"ts":"1690233067801"} 2023-07-24 21:11:07,804 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 21:11:07,804 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 6e255690c5151703c1dc2584df251c1e, server=jenkins-hbase4.apache.org,33693,1690233065825 in 170 msec 2023-07-24 21:11:07,805 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-24 21:11:07,805 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=6e255690c5151703c1dc2584df251c1e, ASSIGN in 326 msec 2023-07-24 21:11:07,806 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:11:07,806 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233067806"}]},"ts":"1690233067806"} 2023-07-24 21:11:07,807 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-24 21:11:07,810 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:11:07,811 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 396 msec 2023-07-24 21:11:08,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 21:11:08,022 INFO [Listener at localhost/36605] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-24 21:11:08,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:08,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-24 21:11:08,026 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:08,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-24 21:11:08,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 21:11:08,042 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:08,044 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:08,046 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=22 msec 2023-07-24 21:11:08,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 21:11:08,130 INFO [Listener at localhost/36605] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-24 21:11:08,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:08,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:08,133 INFO [Listener at localhost/36605] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-24 21:11:08,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-24 21:11:08,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-24 21:11:08,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 21:11:08,137 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233068137"}]},"ts":"1690233068137"} 2023-07-24 21:11:08,138 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-24 21:11:08,139 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-24 21:11:08,140 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=6e255690c5151703c1dc2584df251c1e, UNASSIGN}] 2023-07-24 21:11:08,141 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=6e255690c5151703c1dc2584df251c1e, UNASSIGN 2023-07-24 21:11:08,141 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=6e255690c5151703c1dc2584df251c1e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:08,142 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233068141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233068141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233068141"}]},"ts":"1690233068141"} 2023-07-24 21:11:08,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 6e255690c5151703c1dc2584df251c1e, server=jenkins-hbase4.apache.org,33693,1690233065825}] 2023-07-24 21:11:08,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 21:11:08,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:08,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6e255690c5151703c1dc2584df251c1e, disabling compactions & flushes 2023-07-24 21:11:08,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 2023-07-24 21:11:08,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 2023-07-24 21:11:08,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. after waiting 0 ms 2023-07-24 21:11:08,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 2023-07-24 21:11:08,300 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/np1/table1/6e255690c5151703c1dc2584df251c1e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:11:08,301 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e. 
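[editor's note] The rolled-back pid=19 a few entries above is the namespace region quota check rejecting np1:table2: namespace np1 permits at most 5 regions, 1 is already used by np1:table1, and the new table was requested with 6 regions. A hedged sketch of how such a cap is normally configured and tripped is below. The quota key hbase.namespace.quota.maxregions and the pre-split into 6 regions are assumptions based on HBase's documented namespace quota mechanism and the numbers in the error message, not code taken from this test.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class NamespaceRegionQuotaSketch {
  static void demo(Admin admin) throws IOException {
    // Cap the namespace at 5 regions (the limit reported in the log).
    admin.createNamespace(NamespaceDescriptor.create("np1")
        .addConfiguration("hbase.namespace.quota.maxregions", "5")
        .build());

    // np1:table1 with a single region uses 1 of the 5 permitted regions.
    admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build());

    // Asking for np1:table2 pre-split into 6 regions (5 split keys) exceeds the
    // quota, so the master rolls the CreateTableProcedure back, as logged above.
    byte[][] splits = {
        Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
        Bytes.toBytes("4"), Bytes.toBytes("5")
    };
    try {
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table2"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build(), splits);
    } catch (IOException e) {
      // In this run the failure surfaced as
      // org.apache.hadoop.hbase.quotas.QuotaExceededException.
    }
  }
}
```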
2023-07-24 21:11:08,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6e255690c5151703c1dc2584df251c1e: 2023-07-24 21:11:08,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:08,302 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=6e255690c5151703c1dc2584df251c1e, regionState=CLOSED 2023-07-24 21:11:08,303 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233068302"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233068302"}]},"ts":"1690233068302"} 2023-07-24 21:11:08,305 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-24 21:11:08,305 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 6e255690c5151703c1dc2584df251c1e, server=jenkins-hbase4.apache.org,33693,1690233065825 in 161 msec 2023-07-24 21:11:08,306 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-24 21:11:08,306 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=6e255690c5151703c1dc2584df251c1e, UNASSIGN in 165 msec 2023-07-24 21:11:08,307 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233068307"}]},"ts":"1690233068307"} 2023-07-24 21:11:08,308 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-24 21:11:08,310 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-24 21:11:08,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 176 msec 2023-07-24 21:11:08,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 21:11:08,439 INFO [Listener at localhost/36605] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-24 21:11:08,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-24 21:11:08,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-24 21:11:08,442 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 21:11:08,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-24 21:11:08,443 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 21:11:08,444 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:08,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 21:11:08,446 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:08,448 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/6e255690c5151703c1dc2584df251c1e/fam1, FileablePath, hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/6e255690c5151703c1dc2584df251c1e/recovered.edits] 2023-07-24 21:11:08,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 21:11:08,453 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/6e255690c5151703c1dc2584df251c1e/recovered.edits/4.seqid to hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/archive/data/np1/table1/6e255690c5151703c1dc2584df251c1e/recovered.edits/4.seqid 2023-07-24 21:11:08,454 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/.tmp/data/np1/table1/6e255690c5151703c1dc2584df251c1e 2023-07-24 21:11:08,454 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 21:11:08,456 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 21:11:08,457 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-24 21:11:08,459 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-24 21:11:08,460 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 21:11:08,460 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-24 21:11:08,460 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233068460"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:08,461 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 21:11:08,461 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6e255690c5151703c1dc2584df251c1e, NAME => 'np1:table1,,1690233067413.6e255690c5151703c1dc2584df251c1e.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 21:11:08,461 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
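[editor's note] The DISABLE and DELETE procedures above (pids 20 to 23), followed shortly by the delete of namespace np1, correspond to the usual client-side teardown sequence. A minimal sketch is below, assuming an open Admin handle; the rsgroup bookkeeping ("Removing deleted table 'np1:table1' from rsgroup 'default'") and the HFileArchiver moves are side effects performed by the master, not extra client calls.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTableAndNamespaceSketch {
  static void teardown(Admin admin) throws IOException {
    TableName table1 = TableName.valueOf("np1", "table1");
    if (!admin.isTableDisabled(table1)) {
      admin.disableTable(table1); // DisableTableProcedure: region unassigned and CLOSED
    }
    admin.deleteTable(table1);    // DeleteTableProcedure: region dirs archived, meta rows removed
    admin.deleteNamespace("np1"); // DeleteNamespaceProcedure: only valid once the namespace is empty
  }
}
```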
2023-07-24 21:11:08,461 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690233068461"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:08,462 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-24 21:11:08,466 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 21:11:08,467 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 27 msec 2023-07-24 21:11:08,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 21:11:08,550 INFO [Listener at localhost/36605] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-24 21:11:08,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-24 21:11:08,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-24 21:11:08,567 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 21:11:08,570 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 21:11:08,572 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 21:11:08,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 21:11:08,574 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-24 21:11:08,574 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:08,574 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 21:11:08,576 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 21:11:08,577 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 21 msec 2023-07-24 21:11:08,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36583] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 21:11:08,674 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 21:11:08,674 INFO [Listener at 
localhost/36605] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 21:11:08,674 DEBUG [Listener at localhost/36605] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7f0cff46 to 127.0.0.1:53256 2023-07-24 21:11:08,674 DEBUG [Listener at localhost/36605] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:08,675 DEBUG [Listener at localhost/36605] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 21:11:08,675 DEBUG [Listener at localhost/36605] util.JVMClusterUtil(257): Found active master hash=641753266, stopped=false 2023-07-24 21:11:08,675 DEBUG [Listener at localhost/36605] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 21:11:08,675 DEBUG [Listener at localhost/36605] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 21:11:08,675 DEBUG [Listener at localhost/36605] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 21:11:08,675 INFO [Listener at localhost/36605] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:08,677 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:08,677 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:08,677 INFO [Listener at localhost/36605] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 21:11:08,677 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:08,677 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:08,677 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:08,677 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:08,679 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:08,679 DEBUG [Listener at localhost/36605] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1c3894ca to 127.0.0.1:53256 2023-07-24 21:11:08,679 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:08,679 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:08,679 DEBUG [Listener at localhost/36605] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:08,679 INFO [Listener at localhost/36605] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33693,1690233065825' ***** 2023-07-24 21:11:08,679 INFO [Listener at localhost/36605] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:08,680 INFO [Listener at localhost/36605] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33655,1690233065872' ***** 2023-07-24 21:11:08,680 INFO [Listener at localhost/36605] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:08,680 INFO [Listener at localhost/36605] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39169,1690233065915' ***** 2023-07-24 21:11:08,680 INFO [Listener at localhost/36605] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:08,680 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:08,680 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:08,680 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:08,689 INFO [RS:1;jenkins-hbase4:33655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@69b3dc86{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:08,689 INFO [RS:0;jenkins-hbase4:33693] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@19da0d72{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:08,689 INFO [RS:2;jenkins-hbase4:39169] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@471104d8{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:08,690 INFO [RS:0;jenkins-hbase4:33693] server.AbstractConnector(383): Stopped ServerConnector@14ac6c96{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:08,690 INFO [RS:1;jenkins-hbase4:33655] server.AbstractConnector(383): Stopped ServerConnector@29d5e39{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:08,690 INFO [RS:2;jenkins-hbase4:39169] server.AbstractConnector(383): Stopped ServerConnector@692f1be5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:08,690 INFO [RS:1;jenkins-hbase4:33655] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:08,690 INFO [RS:0;jenkins-hbase4:33693] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:08,690 INFO [RS:2;jenkins-hbase4:39169] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:08,693 INFO [RS:1;jenkins-hbase4:33655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@d26bd46{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:08,693 INFO [RS:0;jenkins-hbase4:33693] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@1b389078{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:08,693 INFO [RS:1;jenkins-hbase4:33655] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c64a668{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:08,693 INFO [RS:0;jenkins-hbase4:33693] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@11ee97d1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:08,693 INFO [RS:2;jenkins-hbase4:39169] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5de4fb80{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:08,693 INFO [RS:2;jenkins-hbase4:39169] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6defc860{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:08,694 INFO [RS:1;jenkins-hbase4:33655] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:08,694 INFO [RS:1;jenkins-hbase4:33655] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 21:11:08,694 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:08,694 INFO [RS:1;jenkins-hbase4:33655] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:08,694 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(3305): Received CLOSE for f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:08,695 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(3305): Received CLOSE for 635e2023d15573ead56e61286e0aa7a2 2023-07-24 21:11:08,695 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:08,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f3e8d4d6c573151fa50ba9c27c60ef3d, disabling compactions & flushes 2023-07-24 21:11:08,695 DEBUG [RS:1;jenkins-hbase4:33655] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x108c95bd to 127.0.0.1:53256 2023-07-24 21:11:08,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:08,695 INFO [RS:0;jenkins-hbase4:33693] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:08,695 INFO [RS:2;jenkins-hbase4:39169] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:08,696 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:08,696 INFO [RS:2;jenkins-hbase4:39169] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-24 21:11:08,696 INFO [RS:0;jenkins-hbase4:33693] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 21:11:08,696 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:08,696 INFO [RS:0;jenkins-hbase4:33693] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:08,697 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:08,697 DEBUG [RS:0;jenkins-hbase4:33693] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x017355dd to 127.0.0.1:53256 2023-07-24 21:11:08,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:08,696 DEBUG [RS:1;jenkins-hbase4:33655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:08,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. after waiting 0 ms 2023-07-24 21:11:08,697 DEBUG [RS:0;jenkins-hbase4:33693] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:08,696 INFO [RS:2;jenkins-hbase4:39169] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:08,697 INFO [RS:0;jenkins-hbase4:33693] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:08,698 INFO [RS:0;jenkins-hbase4:33693] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:08,698 INFO [RS:0;jenkins-hbase4:33693] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 21:11:08,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 
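[editor's note] From 21:11:08,674 onward the harness tears the minicluster down: the master requests cluster shutdown, and each region server stops its info server, heap memory manager, flush and snapshot managers, then begins closing its online regions. This is the standard HBaseTestingUtility lifecycle; a bare-bones sketch is below, with the caveat that the real TestRSGroupsAdmin1 setup configures more (rsgroup coprocessors, the quota observer) than shown here.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    TEST_UTIL.startMiniCluster(3); // one master plus three region servers, as in this run
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Produces the "Shutting down minicluster" sequence: regions are closed and
    // flushed, WALs are rolled into oldWALs, then the region servers, master,
    // ZooKeeper and DFS are stopped.
    TEST_UTIL.shutdownMiniCluster();
  }
}
```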
2023-07-24 21:11:08,697 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 21:11:08,698 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 21:11:08,698 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(3305): Received CLOSE for 158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:08,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f3e8d4d6c573151fa50ba9c27c60ef3d 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-24 21:11:08,698 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1478): Online Regions={f3e8d4d6c573151fa50ba9c27c60ef3d=hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d., 635e2023d15573ead56e61286e0aa7a2=hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2.} 2023-07-24 21:11:08,699 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 21:11:08,699 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1504): Waiting on 635e2023d15573ead56e61286e0aa7a2, f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:08,698 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:08,699 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-24 21:11:08,699 DEBUG [RS:2;jenkins-hbase4:39169] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7e40c91c to 127.0.0.1:53256 2023-07-24 21:11:08,699 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 21:11:08,699 DEBUG [RS:2;jenkins-hbase4:39169] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:08,699 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 21:11:08,699 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1478): Online Regions={158b9f54e8204dcebcb1991d735b0d7c=hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c.} 2023-07-24 21:11:08,699 DEBUG [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1504): Waiting on 158b9f54e8204dcebcb1991d735b0d7c 2023-07-24 21:11:08,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 158b9f54e8204dcebcb1991d735b0d7c, disabling compactions & flushes 2023-07-24 21:11:08,701 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 21:11:08,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:08,702 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 21:11:08,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 
2023-07-24 21:11:08,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 21:11:08,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. after waiting 0 ms 2023-07-24 21:11:08,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 21:11:08,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:08,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 21:11:08,702 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-24 21:11:08,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/quota/158b9f54e8204dcebcb1991d735b0d7c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:11:08,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:08,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 158b9f54e8204dcebcb1991d735b0d7c: 2023-07-24 21:11:08,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690233067169.158b9f54e8204dcebcb1991d735b0d7c. 2023-07-24 21:11:08,764 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:08,764 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:08,765 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:08,899 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1504): Waiting on 635e2023d15573ead56e61286e0aa7a2, f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:08,899 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 21:11:08,899 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39169,1690233065915; all regions closed. 2023-07-24 21:11:08,899 DEBUG [RS:2;jenkins-hbase4:39169] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
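[editor's note] hbase:quota closes with nothing to flush, while hbase:rsgroup and hbase:meta flush 585 B and roughly 5.89 KB of remaining memstore data as part of their close (hbase:namespace follows later with 215 B). Those flushes happen automatically on the close path. For illustration only, the same flushes can be requested explicitly from a client beforehand, which is sometimes done to keep shutdown fast; the Admin.flush calls below are a sketch, not something this test does.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class ExplicitFlushSketch {
  static void flushSystemTables(Admin admin) throws IOException {
    admin.flush(TableName.valueOf("hbase:rsgroup"));
    admin.flush(TableName.valueOf("hbase:namespace"));
    admin.flush(TableName.META_TABLE_NAME); // hbase:meta, all three column families
  }
}
```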
2023-07-24 21:11:08,905 DEBUG [RS:2;jenkins-hbase4:39169] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/oldWALs 2023-07-24 21:11:08,905 INFO [RS:2;jenkins-hbase4:39169] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39169%2C1690233065915:(num 1690233066607) 2023-07-24 21:11:08,905 DEBUG [RS:2;jenkins-hbase4:39169] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:08,905 INFO [RS:2;jenkins-hbase4:39169] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:08,905 INFO [RS:2;jenkins-hbase4:39169] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:08,905 INFO [RS:2;jenkins-hbase4:39169] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:08,905 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:08,905 INFO [RS:2;jenkins-hbase4:39169] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:08,906 INFO [RS:2;jenkins-hbase4:39169] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 21:11:08,907 INFO [RS:2;jenkins-hbase4:39169] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39169 2023-07-24 21:11:08,910 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:08,910 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:08,910 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:08,910 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:08,910 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:08,911 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39169,1690233065915 2023-07-24 21:11:08,911 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:08,911 INFO [RegionServerTracker-0] 
master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39169,1690233065915] 2023-07-24 21:11:08,911 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39169,1690233065915; numProcessing=1 2023-07-24 21:11:08,912 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39169,1690233065915 already deleted, retry=false 2023-07-24 21:11:08,912 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39169,1690233065915 expired; onlineServers=2 2023-07-24 21:11:09,099 DEBUG [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1504): Waiting on 635e2023d15573ead56e61286e0aa7a2, f3e8d4d6c573151fa50ba9c27c60ef3d 2023-07-24 21:11:09,099 DEBUG [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 21:11:09,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d/.tmp/m/931e35eeb3554f9e8ce02af1c598b397 2023-07-24 21:11:09,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/.tmp/info/992553cd983743bcb599a4962ead2c3f 2023-07-24 21:11:09,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d/.tmp/m/931e35eeb3554f9e8ce02af1c598b397 as hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d/m/931e35eeb3554f9e8ce02af1c598b397 2023-07-24 21:11:09,129 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 992553cd983743bcb599a4962ead2c3f 2023-07-24 21:11:09,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d/m/931e35eeb3554f9e8ce02af1c598b397, entries=1, sequenceid=7, filesize=4.9 K 2023-07-24 21:11:09,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for f3e8d4d6c573151fa50ba9c27c60ef3d in 437ms, sequenceid=7, compaction requested=false 2023-07-24 21:11:09,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 21:11:09,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/rsgroup/f3e8d4d6c573151fa50ba9c27c60ef3d/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-24 21:11:09,142 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data 
size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/.tmp/rep_barrier/2518fafd2ffd4f3e80cc570d5ed8a7de 2023-07-24 21:11:09,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:11:09,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:09,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f3e8d4d6c573151fa50ba9c27c60ef3d: 2023-07-24 21:11:09,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690233066799.f3e8d4d6c573151fa50ba9c27c60ef3d. 2023-07-24 21:11:09,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 635e2023d15573ead56e61286e0aa7a2, disabling compactions & flushes 2023-07-24 21:11:09,143 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:09,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:09,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. after waiting 0 ms 2023-07-24 21:11:09,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 
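[editor's note] Every state change in this log, region OPENING/OPEN/CLOSING/CLOSED and table ENABLED/DISABLING/DISABLED alike, was persisted as a Put against hbase:meta (the info:regioninfo, info:sn, info:state, info:server, info:serverstartcode and info:seqnumDuringOpen qualifiers in the RegionStateStore entries earlier), and hbase:meta is the region being flushed and closed here. Below is a sketch of reading those columns back through the ordinary client API; only the "state" and "server" qualifier strings come from the log, the rest (connection handle, printing) is illustrative. Production code would normally use the Admin/ClusterMetrics APIs rather than scanning meta directly.

```java
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class ReadMetaStateSketch {
  static void dumpRegionStates(Connection conn) throws java.io.IOException {
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner =
             meta.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
      for (Result r : scanner) {
        // info:state holds the region state name, info:server the hosting server.
        byte[] state = r.getValue(HConstants.CATALOG_FAMILY, Bytes.toBytes("state"));
        byte[] server = r.getValue(HConstants.CATALOG_FAMILY, Bytes.toBytes("server"));
        System.out.println(Bytes.toString(r.getRow())
            + " state=" + (state == null ? "-" : Bytes.toString(state))
            + " server=" + (server == null ? "-" : Bytes.toString(server)));
      }
    }
  }
}
```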
2023-07-24 21:11:09,143 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 635e2023d15573ead56e61286e0aa7a2 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-24 21:11:09,147 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2518fafd2ffd4f3e80cc570d5ed8a7de 2023-07-24 21:11:09,160 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2/.tmp/info/d3f9fab629e84ccf8667eb5bd22a46af 2023-07-24 21:11:09,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d3f9fab629e84ccf8667eb5bd22a46af 2023-07-24 21:11:09,165 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/.tmp/table/0b0d92ead7a04a518378dd1792b53ee4 2023-07-24 21:11:09,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2/.tmp/info/d3f9fab629e84ccf8667eb5bd22a46af as hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2/info/d3f9fab629e84ccf8667eb5bd22a46af 2023-07-24 21:11:09,171 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0b0d92ead7a04a518378dd1792b53ee4 2023-07-24 21:11:09,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d3f9fab629e84ccf8667eb5bd22a46af 2023-07-24 21:11:09,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2/info/d3f9fab629e84ccf8667eb5bd22a46af, entries=3, sequenceid=8, filesize=5.0 K 2023-07-24 21:11:09,172 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/.tmp/info/992553cd983743bcb599a4962ead2c3f as hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/info/992553cd983743bcb599a4962ead2c3f 2023-07-24 21:11:09,173 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 635e2023d15573ead56e61286e0aa7a2 in 30ms, sequenceid=8, compaction requested=false 2023-07-24 21:11:09,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 21:11:09,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/namespace/635e2023d15573ead56e61286e0aa7a2/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-24 21:11:09,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:09,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 635e2023d15573ead56e61286e0aa7a2: 2023-07-24 21:11:09,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690233066806.635e2023d15573ead56e61286e0aa7a2. 2023-07-24 21:11:09,181 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 992553cd983743bcb599a4962ead2c3f 2023-07-24 21:11:09,181 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/info/992553cd983743bcb599a4962ead2c3f, entries=32, sequenceid=31, filesize=8.5 K 2023-07-24 21:11:09,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/.tmp/rep_barrier/2518fafd2ffd4f3e80cc570d5ed8a7de as hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/rep_barrier/2518fafd2ffd4f3e80cc570d5ed8a7de 2023-07-24 21:11:09,188 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2518fafd2ffd4f3e80cc570d5ed8a7de 2023-07-24 21:11:09,188 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/rep_barrier/2518fafd2ffd4f3e80cc570d5ed8a7de, entries=1, sequenceid=31, filesize=4.9 K 2023-07-24 21:11:09,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/.tmp/table/0b0d92ead7a04a518378dd1792b53ee4 as hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/table/0b0d92ead7a04a518378dd1792b53ee4 2023-07-24 21:11:09,196 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0b0d92ead7a04a518378dd1792b53ee4 2023-07-24 21:11:09,196 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/table/0b0d92ead7a04a518378dd1792b53ee4, entries=8, sequenceid=31, filesize=5.2 K 2023-07-24 21:11:09,197 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 495ms, sequenceid=31, compaction requested=false 2023-07-24 21:11:09,197 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for 
table 'hbase:meta' 2023-07-24 21:11:09,207 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-24 21:11:09,208 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:11:09,208 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 21:11:09,208 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 21:11:09,208 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 21:11:09,277 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:09,277 INFO [RS:2;jenkins-hbase4:39169] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39169,1690233065915; zookeeper connection closed. 2023-07-24 21:11:09,277 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:39169-0x101992c540a0003, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:09,278 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@419fa844] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@419fa844 2023-07-24 21:11:09,299 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33655,1690233065872; all regions closed. 2023-07-24 21:11:09,299 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33693,1690233065825; all regions closed. 2023-07-24 21:11:09,299 DEBUG [RS:0;jenkins-hbase4:33693] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 21:11:09,299 DEBUG [RS:1;jenkins-hbase4:33655] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
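[Editor's note] The entries above (21:11:09,143 through 21:11:09,208) trace HBase's close-time flush path for hbase:namespace and hbase:meta: the memstore is written to a .tmp HFile, the file is committed into the column-family directory, a recovered.edits seqid marker is written, and the region is closed. As a point of reference only, the same flush path can be driven explicitly from test code through the public client Admin API; this is a minimal sketch under that assumption and is not part of this test run (the class name and configuration setup are illustrative).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushBeforeShutdownSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Drives the same path logged above: memstore -> .tmp HFile -> commit into the store.
      admin.flush(TableName.valueOf("hbase:namespace"));
      admin.flush(TableName.META_TABLE_NAME);
    }
  }
}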
2023-07-24 21:11:09,308 DEBUG [RS:1;jenkins-hbase4:33655] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/oldWALs 2023-07-24 21:11:09,308 INFO [RS:1;jenkins-hbase4:33655] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33655%2C1690233065872:(num 1690233066607) 2023-07-24 21:11:09,308 DEBUG [RS:0;jenkins-hbase4:33693] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/oldWALs 2023-07-24 21:11:09,308 DEBUG [RS:1;jenkins-hbase4:33655] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:09,309 INFO [RS:0;jenkins-hbase4:33693] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33693%2C1690233065825.meta:.meta(num 1690233066743) 2023-07-24 21:11:09,309 INFO [RS:1;jenkins-hbase4:33655] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:09,309 INFO [RS:1;jenkins-hbase4:33655] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:09,309 INFO [RS:1;jenkins-hbase4:33655] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:09,309 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:09,309 INFO [RS:1;jenkins-hbase4:33655] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:09,309 INFO [RS:1;jenkins-hbase4:33655] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 21:11:09,310 INFO [RS:1;jenkins-hbase4:33655] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33655 2023-07-24 21:11:09,315 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:09,315 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:09,315 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33655,1690233065872 2023-07-24 21:11:09,317 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33655,1690233065872] 2023-07-24 21:11:09,317 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33655,1690233065872; numProcessing=2 2023-07-24 21:11:09,317 DEBUG [RS:0;jenkins-hbase4:33693] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/oldWALs 2023-07-24 21:11:09,317 INFO [RS:0;jenkins-hbase4:33693] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33693%2C1690233065825:(num 1690233066601) 2023-07-24 21:11:09,317 DEBUG [RS:0;jenkins-hbase4:33693] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 
21:11:09,317 INFO [RS:0;jenkins-hbase4:33693] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:09,317 INFO [RS:0;jenkins-hbase4:33693] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:09,317 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:09,318 INFO [RS:0;jenkins-hbase4:33693] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33693 2023-07-24 21:11:09,320 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33655,1690233065872 already deleted, retry=false 2023-07-24 21:11:09,320 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33655,1690233065872 expired; onlineServers=1 2023-07-24 21:11:09,321 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33693,1690233065825 2023-07-24 21:11:09,321 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:09,323 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33693,1690233065825] 2023-07-24 21:11:09,323 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33693,1690233065825; numProcessing=3 2023-07-24 21:11:09,324 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33693,1690233065825 already deleted, retry=false 2023-07-24 21:11:09,324 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33693,1690233065825 expired; onlineServers=0 2023-07-24 21:11:09,324 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36583,1690233065756' ***** 2023-07-24 21:11:09,324 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 21:11:09,324 DEBUG [M:0;jenkins-hbase4:36583] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@227fe0a2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:09,325 INFO [M:0;jenkins-hbase4:36583] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:09,326 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:09,326 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-24 21:11:09,326 INFO [M:0;jenkins-hbase4:36583] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@56a25f8{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 21:11:09,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:09,326 INFO [M:0;jenkins-hbase4:36583] server.AbstractConnector(383): Stopped ServerConnector@61d6c91f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:09,327 INFO [M:0;jenkins-hbase4:36583] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:09,327 INFO [M:0;jenkins-hbase4:36583] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@19ed0121{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:09,327 INFO [M:0;jenkins-hbase4:36583] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5823e00a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:09,327 INFO [M:0;jenkins-hbase4:36583] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36583,1690233065756 2023-07-24 21:11:09,327 INFO [M:0;jenkins-hbase4:36583] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36583,1690233065756; all regions closed. 2023-07-24 21:11:09,327 DEBUG [M:0;jenkins-hbase4:36583] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:09,327 INFO [M:0;jenkins-hbase4:36583] master.HMaster(1491): Stopping master jetty server 2023-07-24 21:11:09,328 INFO [M:0;jenkins-hbase4:36583] server.AbstractConnector(383): Stopped ServerConnector@64df5e88{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:09,328 DEBUG [M:0;jenkins-hbase4:36583] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 21:11:09,329 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 21:11:09,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233066206] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233066206,5,FailOnTimeoutGroup] 2023-07-24 21:11:09,329 DEBUG [M:0;jenkins-hbase4:36583] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 21:11:09,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233066206] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233066206,5,FailOnTimeoutGroup] 2023-07-24 21:11:09,329 INFO [M:0;jenkins-hbase4:36583] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 21:11:09,330 INFO [M:0;jenkins-hbase4:36583] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-24 21:11:09,330 INFO [M:0;jenkins-hbase4:36583] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:09,330 DEBUG [M:0;jenkins-hbase4:36583] master.HMaster(1512): Stopping service threads 2023-07-24 21:11:09,330 INFO [M:0;jenkins-hbase4:36583] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 21:11:09,331 ERROR [M:0;jenkins-hbase4:36583] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 21:11:09,331 INFO [M:0;jenkins-hbase4:36583] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 21:11:09,331 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 21:11:09,331 DEBUG [M:0;jenkins-hbase4:36583] zookeeper.ZKUtil(398): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 21:11:09,331 WARN [M:0;jenkins-hbase4:36583] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 21:11:09,331 INFO [M:0;jenkins-hbase4:36583] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 21:11:09,332 INFO [M:0;jenkins-hbase4:36583] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 21:11:09,332 DEBUG [M:0;jenkins-hbase4:36583] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 21:11:09,332 INFO [M:0;jenkins-hbase4:36583] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:09,332 DEBUG [M:0;jenkins-hbase4:36583] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:09,332 DEBUG [M:0;jenkins-hbase4:36583] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 21:11:09,332 DEBUG [M:0;jenkins-hbase4:36583] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 21:11:09,332 INFO [M:0;jenkins-hbase4:36583] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.99 KB heapSize=109.15 KB 2023-07-24 21:11:09,344 INFO [M:0;jenkins-hbase4:36583] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.99 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/22d31431510345cdb01d7beaecc698ee 2023-07-24 21:11:09,349 DEBUG [M:0;jenkins-hbase4:36583] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/22d31431510345cdb01d7beaecc698ee as hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/22d31431510345cdb01d7beaecc698ee 2023-07-24 21:11:09,356 INFO [M:0;jenkins-hbase4:36583] regionserver.HStore(1080): Added hdfs://localhost:41467/user/jenkins/test-data/bf6e1ee4-de85-6181-80e5-1b29689d1640/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/22d31431510345cdb01d7beaecc698ee, entries=24, sequenceid=194, filesize=12.4 K 2023-07-24 21:11:09,356 INFO [M:0;jenkins-hbase4:36583] regionserver.HRegion(2948): Finished flush of dataSize ~92.99 KB/95220, heapSize ~109.13 KB/111752, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=194, compaction requested=false 2023-07-24 21:11:09,358 INFO [M:0;jenkins-hbase4:36583] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:09,358 DEBUG [M:0;jenkins-hbase4:36583] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:11:09,363 INFO [M:0;jenkins-hbase4:36583] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 21:11:09,363 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:09,364 INFO [M:0;jenkins-hbase4:36583] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36583 2023-07-24 21:11:09,365 DEBUG [M:0;jenkins-hbase4:36583] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36583,1690233065756 already deleted, retry=false 2023-07-24 21:11:09,417 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:09,417 INFO [RS:1;jenkins-hbase4:33655] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33655,1690233065872; zookeeper connection closed. 
2023-07-24 21:11:09,417 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33655-0x101992c540a0002, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:09,419 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1c763be6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1c763be6 2023-07-24 21:11:09,517 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:09,517 INFO [M:0;jenkins-hbase4:36583] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36583,1690233065756; zookeeper connection closed. 2023-07-24 21:11:09,517 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): master:36583-0x101992c540a0000, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:09,617 INFO [RS:0;jenkins-hbase4:33693] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33693,1690233065825; zookeeper connection closed. 2023-07-24 21:11:09,617 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:09,617 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): regionserver:33693-0x101992c540a0001, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:09,618 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4320bab1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4320bab1 2023-07-24 21:11:09,618 INFO [Listener at localhost/36605] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-24 21:11:09,618 WARN [Listener at localhost/36605] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:09,624 INFO [Listener at localhost/36605] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:09,630 WARN [BP-892563108-172.31.14.131-1690233064755 heartbeating to localhost/127.0.0.1:41467] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 21:11:09,632 WARN [BP-892563108-172.31.14.131-1690233064755 heartbeating to localhost/127.0.0.1:41467] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-892563108-172.31.14.131-1690233064755 (Datanode Uuid 12303e03-e7ef-4b95-a514-4f1ebd9a6ff4) service to localhost/127.0.0.1:41467 2023-07-24 21:11:09,633 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/dfs/data/data6/current/BP-892563108-172.31.14.131-1690233064755] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:09,633 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/dfs/data/data5/current/BP-892563108-172.31.14.131-1690233064755] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:09,637 WARN [Listener at localhost/36605] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:09,646 INFO [Listener at localhost/36605] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:09,751 WARN [BP-892563108-172.31.14.131-1690233064755 heartbeating to localhost/127.0.0.1:41467] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 21:11:09,751 WARN [BP-892563108-172.31.14.131-1690233064755 heartbeating to localhost/127.0.0.1:41467] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-892563108-172.31.14.131-1690233064755 (Datanode Uuid 2a2ab15a-4ccd-4591-969a-44ab806e3422) service to localhost/127.0.0.1:41467 2023-07-24 21:11:09,753 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/dfs/data/data3/current/BP-892563108-172.31.14.131-1690233064755] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:09,753 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/dfs/data/data4/current/BP-892563108-172.31.14.131-1690233064755] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:09,754 WARN [Listener at localhost/36605] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:09,757 INFO [Listener at localhost/36605] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:09,862 WARN [BP-892563108-172.31.14.131-1690233064755 heartbeating to localhost/127.0.0.1:41467] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 21:11:09,862 WARN [BP-892563108-172.31.14.131-1690233064755 heartbeating to localhost/127.0.0.1:41467] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-892563108-172.31.14.131-1690233064755 (Datanode Uuid dc0523c6-7882-482b-bfa9-1378addf4a9c) service to localhost/127.0.0.1:41467 2023-07-24 21:11:09,865 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/dfs/data/data1/current/BP-892563108-172.31.14.131-1690233064755] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:09,865 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/cluster_ddc68db4-4a22-dc14-d567-2ea22ad4e0b7/dfs/data/data2/current/BP-892563108-172.31.14.131-1690233064755] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-24 21:11:09,873 INFO [Listener at localhost/36605] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:09,988 INFO [Listener at localhost/36605] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 21:11:10,015 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 21:11:10,015 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 21:11:10,015 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.log.dir so I do NOT create it in target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f66b2b3-d615-1bf7-e22d-bb0772221c46/hadoop.tmp.dir so I do NOT create it in target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0, deleteOnExit=true 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/test.cache.data in system properties and HBase conf 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir in system properties and HBase conf 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 21:11:10,016 INFO [Listener at localhost/36605] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 21:11:10,016 DEBUG [Listener at localhost/36605] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-24 21:11:10,017 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 21:11:10,017 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 21:11:10,017 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 21:11:10,017 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 21:11:10,017 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 21:11:10,017 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 21:11:10,017 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 21:11:10,017 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 21:11:10,018 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 21:11:10,018 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/nfs.dump.dir in system properties and HBase conf 2023-07-24 21:11:10,018 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/java.io.tmpdir in system properties and HBase conf 2023-07-24 21:11:10,018 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 21:11:10,018 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 21:11:10,018 INFO [Listener at localhost/36605] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 21:11:10,022 WARN [Listener at localhost/36605] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 21:11:10,022 WARN [Listener at localhost/36605] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 21:11:10,062 WARN [Listener at localhost/36605] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:11:10,064 INFO [Listener at localhost/36605] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:11:10,070 INFO [Listener at localhost/36605] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/java.io.tmpdir/Jetty_localhost_40809_hdfs____yzve9b/webapp 2023-07-24 21:11:10,087 DEBUG [Listener at localhost/36605-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101992c540a000a, quorum=127.0.0.1:53256, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 21:11:10,087 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101992c540a000a, quorum=127.0.0.1:53256, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 21:11:10,170 INFO [Listener at localhost/36605] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40809 2023-07-24 21:11:10,174 WARN [Listener at localhost/36605] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 21:11:10,174 WARN [Listener at localhost/36605] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 21:11:10,215 WARN [Listener at localhost/46175] common.MetricsLoggerTask(153): Metrics logging will not be 
async since the logger is not log4j 2023-07-24 21:11:10,231 WARN [Listener at localhost/46175] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:11:10,233 WARN [Listener at localhost/46175] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:11:10,234 INFO [Listener at localhost/46175] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:11:10,240 INFO [Listener at localhost/46175] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/java.io.tmpdir/Jetty_localhost_45143_datanode____ivv1km/webapp 2023-07-24 21:11:10,344 INFO [Listener at localhost/46175] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45143 2023-07-24 21:11:10,354 WARN [Listener at localhost/44105] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:11:10,402 WARN [Listener at localhost/44105] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:11:10,406 WARN [Listener at localhost/44105] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:11:10,408 INFO [Listener at localhost/44105] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:11:10,414 INFO [Listener at localhost/44105] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/java.io.tmpdir/Jetty_localhost_45205_datanode____.3h2q9v/webapp 2023-07-24 21:11:10,495 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x88a5ce0bb7cab909: Processing first storage report for DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0 from datanode f549d505-3391-4810-b83d-a45c151e323a 2023-07-24 21:11:10,495 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x88a5ce0bb7cab909: from storage DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0 node DatanodeRegistration(127.0.0.1:34447, datanodeUuid=f549d505-3391-4810-b83d-a45c151e323a, infoPort=39865, infoSecurePort=0, ipcPort=44105, storageInfo=lv=-57;cid=testClusterID;nsid=601509330;c=1690233070025), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:10,495 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x88a5ce0bb7cab909: Processing first storage report for DS-56369a65-05fe-42c8-84a3-097b49fe7af3 from datanode f549d505-3391-4810-b83d-a45c151e323a 2023-07-24 21:11:10,495 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x88a5ce0bb7cab909: from storage DS-56369a65-05fe-42c8-84a3-097b49fe7af3 node DatanodeRegistration(127.0.0.1:34447, datanodeUuid=f549d505-3391-4810-b83d-a45c151e323a, infoPort=39865, infoSecurePort=0, ipcPort=44105, storageInfo=lv=-57;cid=testClusterID;nsid=601509330;c=1690233070025), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:10,524 INFO 
[Listener at localhost/44105] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45205 2023-07-24 21:11:10,531 WARN [Listener at localhost/36535] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:11:10,541 WARN [Listener at localhost/36535] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 21:11:10,542 WARN [Listener at localhost/36535] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 21:11:10,544 INFO [Listener at localhost/36535] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 21:11:10,546 INFO [Listener at localhost/36535] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/java.io.tmpdir/Jetty_localhost_34157_datanode____.mtn0jr/webapp 2023-07-24 21:11:10,616 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa8535bf372a225c0: Processing first storage report for DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7 from datanode c99bb084-2850-46a2-af12-d408e98a2a6e 2023-07-24 21:11:10,616 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa8535bf372a225c0: from storage DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7 node DatanodeRegistration(127.0.0.1:39239, datanodeUuid=c99bb084-2850-46a2-af12-d408e98a2a6e, infoPort=36195, infoSecurePort=0, ipcPort=36535, storageInfo=lv=-57;cid=testClusterID;nsid=601509330;c=1690233070025), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:10,616 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa8535bf372a225c0: Processing first storage report for DS-fc77ff73-9278-483d-9b77-7095c41b5bf4 from datanode c99bb084-2850-46a2-af12-d408e98a2a6e 2023-07-24 21:11:10,616 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa8535bf372a225c0: from storage DS-fc77ff73-9278-483d-9b77-7095c41b5bf4 node DatanodeRegistration(127.0.0.1:39239, datanodeUuid=c99bb084-2850-46a2-af12-d408e98a2a6e, infoPort=36195, infoSecurePort=0, ipcPort=36535, storageInfo=lv=-57;cid=testClusterID;nsid=601509330;c=1690233070025), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:10,642 INFO [Listener at localhost/36535] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34157 2023-07-24 21:11:10,649 WARN [Listener at localhost/41541] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 21:11:10,739 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe35bc793ec9a082b: Processing first storage report for DS-5141bebe-3b3c-4eb6-8110-d4665d5de470 from datanode fe67c5e2-f63f-46b9-aaac-5bb077c7a390 2023-07-24 21:11:10,739 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe35bc793ec9a082b: from storage DS-5141bebe-3b3c-4eb6-8110-d4665d5de470 node DatanodeRegistration(127.0.0.1:45113, datanodeUuid=fe67c5e2-f63f-46b9-aaac-5bb077c7a390, infoPort=40991, infoSecurePort=0, ipcPort=41541, 
storageInfo=lv=-57;cid=testClusterID;nsid=601509330;c=1690233070025), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:10,739 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe35bc793ec9a082b: Processing first storage report for DS-b4b8efa3-d992-44ab-9c85-4798439ebbad from datanode fe67c5e2-f63f-46b9-aaac-5bb077c7a390 2023-07-24 21:11:10,739 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe35bc793ec9a082b: from storage DS-b4b8efa3-d992-44ab-9c85-4798439ebbad node DatanodeRegistration(127.0.0.1:45113, datanodeUuid=fe67c5e2-f63f-46b9-aaac-5bb077c7a390, infoPort=40991, infoSecurePort=0, ipcPort=41541, storageInfo=lv=-57;cid=testClusterID;nsid=601509330;c=1690233070025), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 21:11:10,757 DEBUG [Listener at localhost/41541] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa 2023-07-24 21:11:10,758 INFO [Listener at localhost/41541] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/zookeeper_0, clientPort=53183, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 21:11:10,759 INFO [Listener at localhost/41541] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53183 2023-07-24 21:11:10,760 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,760 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,774 INFO [Listener at localhost/41541] util.FSUtils(471): Created version file at hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40 with version=8 2023-07-24 21:11:10,774 INFO [Listener at localhost/41541] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:44343/user/jenkins/test-data/d8f334f4-fc39-ae4b-6e89-5b620d6fd4f7/hbase-staging 2023-07-24 21:11:10,775 DEBUG [Listener at localhost/41541] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 21:11:10,775 DEBUG [Listener at localhost/41541] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 21:11:10,775 DEBUG [Listener at localhost/41541] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 
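[Editor's note] Starting at 21:11:10,015 the log records the old minicluster being reported down and a new one being started with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}, followed by a fresh MiniZooKeeperCluster on clientPort=53183 and a new hbase.rootdir version file. A minimal sketch of how a test typically drives that restart cycle with HBaseTestingUtility is below; the class and variable names are assumptions, while the option values mirror the StartMiniClusterOption printed in the log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterRestartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // ZK, DFS, master and region servers come up, as logged above
    // ... exercise the cluster ...
    util.shutdownMiniCluster();      // logged above as "Minicluster is down"
  }
}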
2023-07-24 21:11:10,775 DEBUG [Listener at localhost/41541] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-24 21:11:10,776 INFO [Listener at localhost/41541] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:10,776 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,776 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,776 INFO [Listener at localhost/41541] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:11:10,776 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,776 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:10,776 INFO [Listener at localhost/41541] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:10,777 INFO [Listener at localhost/41541] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40875 2023-07-24 21:11:10,777 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,778 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,779 INFO [Listener at localhost/41541] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40875 connecting to ZooKeeper ensemble=127.0.0.1:53183 2023-07-24 21:11:10,785 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:408750x0, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:10,786 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40875-0x101992c67a90000 connected 2023-07-24 21:11:10,800 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:10,800 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:10,800 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:10,801 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40875 2023-07-24 21:11:10,801 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40875 2023-07-24 21:11:10,801 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40875 2023-07-24 21:11:10,801 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40875 2023-07-24 21:11:10,801 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40875 2023-07-24 21:11:10,803 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:10,803 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:10,803 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:10,804 INFO [Listener at localhost/41541] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 21:11:10,804 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:10,804 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:10,804 INFO [Listener at localhost/41541] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
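[Editor's note] The LocalHBaseCluster entries above ("Setting Master Port to random", "Setting RegionServer Port to random", and so on) together with the RpcExecutor/NettyRpcServer entries show each test process binding its RPC and info servers to ephemeral ports with small handler pools. The conventional way to get that behaviour in a standalone test Configuration is sketched below; the property keys and the handler count of 3 are assumptions based on standard HBase constants, not values read from this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

public class RandomPortConfSketch {
  public static Configuration randomizedPortConf() {
    Configuration conf = HBaseConfiguration.create();
    // Port 0 asks the OS for an ephemeral port, matching the "random" ports logged above.
    conf.setInt(HConstants.MASTER_PORT, 0);            // hbase.master.port
    conf.setInt(HConstants.MASTER_INFO_PORT, 0);       // hbase.master.info.port
    conf.setInt(HConstants.REGIONSERVER_PORT, 0);      // hbase.regionserver.port
    conf.setInt(HConstants.REGIONSERVER_INFO_PORT, 0); // hbase.regionserver.info.port
    // Small handler pool, assumed to correspond to the handlerCount=3 executors logged above.
    conf.setInt(HConstants.REGION_SERVER_HANDLER_COUNT, 3);
    return conf;
  }
}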
2023-07-24 21:11:10,805 INFO [Listener at localhost/41541] http.HttpServer(1146): Jetty bound to port 44127 2023-07-24 21:11:10,805 INFO [Listener at localhost/41541] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:10,807 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,807 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c092624{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:10,807 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,807 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7e374ea1{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:10,812 INFO [Listener at localhost/41541] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:10,813 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:10,814 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:10,814 INFO [Listener at localhost/41541] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:11:10,815 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,816 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7d884c2{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 21:11:10,817 INFO [Listener at localhost/41541] server.AbstractConnector(333): Started ServerConnector@74a4f3c2{HTTP/1.1, (http/1.1)}{0.0.0.0:44127} 2023-07-24 21:11:10,817 INFO [Listener at localhost/41541] server.Server(415): Started @41463ms 2023-07-24 21:11:10,817 INFO [Listener at localhost/41541] master.HMaster(444): hbase.rootdir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40, hbase.cluster.distributed=false 2023-07-24 21:11:10,831 INFO [Listener at localhost/41541] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:10,831 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,831 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,831 INFO [Listener at localhost/41541] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 
21:11:10,831 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,831 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:10,831 INFO [Listener at localhost/41541] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:10,832 INFO [Listener at localhost/41541] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35235 2023-07-24 21:11:10,832 INFO [Listener at localhost/41541] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:11:10,834 DEBUG [Listener at localhost/41541] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:11:10,835 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,836 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,837 INFO [Listener at localhost/41541] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35235 connecting to ZooKeeper ensemble=127.0.0.1:53183 2023-07-24 21:11:10,840 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:352350x0, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:10,841 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35235-0x101992c67a90001 connected 2023-07-24 21:11:10,841 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:10,842 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:10,842 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:10,843 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35235 2023-07-24 21:11:10,843 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35235 2023-07-24 21:11:10,843 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35235 2023-07-24 21:11:10,844 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35235 2023-07-24 21:11:10,844 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35235 2023-07-24 21:11:10,845 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:10,845 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:10,846 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:10,846 INFO [Listener at localhost/41541] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:11:10,846 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:10,846 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:10,846 INFO [Listener at localhost/41541] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 21:11:10,847 INFO [Listener at localhost/41541] http.HttpServer(1146): Jetty bound to port 36369 2023-07-24 21:11:10,847 INFO [Listener at localhost/41541] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:10,859 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,859 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@40a15cc5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:10,860 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,860 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ee9182{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:10,865 INFO [Listener at localhost/41541] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:10,865 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:10,866 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:10,866 INFO [Listener at localhost/41541] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:11:10,867 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,868 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@338e392b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:10,869 INFO [Listener at localhost/41541] server.AbstractConnector(333): Started ServerConnector@716c1dd1{HTTP/1.1, (http/1.1)}{0.0.0.0:36369} 2023-07-24 21:11:10,869 INFO [Listener at localhost/41541] server.Server(415): Started @41515ms 2023-07-24 21:11:10,883 INFO [Listener at localhost/41541] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:10,883 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,883 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,883 INFO [Listener at localhost/41541] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:11:10,883 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,884 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:10,884 INFO [Listener at localhost/41541] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:10,884 INFO [Listener at localhost/41541] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46505 2023-07-24 21:11:10,885 INFO [Listener at localhost/41541] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:11:10,886 DEBUG [Listener at localhost/41541] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:11:10,887 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,888 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,889 INFO [Listener at localhost/41541] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46505 connecting to ZooKeeper ensemble=127.0.0.1:53183 2023-07-24 21:11:10,894 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:465050x0, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:10,894 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:465050x0, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:10,894 DEBUG [Listener at 
localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:465050x0, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:10,896 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:465050x0, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:10,896 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46505-0x101992c67a90002 connected 2023-07-24 21:11:10,898 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46505 2023-07-24 21:11:10,899 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46505 2023-07-24 21:11:10,899 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46505 2023-07-24 21:11:10,901 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46505 2023-07-24 21:11:10,902 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46505 2023-07-24 21:11:10,904 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:10,904 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:10,904 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:10,905 INFO [Listener at localhost/41541] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:11:10,905 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:10,905 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:10,905 INFO [Listener at localhost/41541] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 21:11:10,906 INFO [Listener at localhost/41541] http.HttpServer(1146): Jetty bound to port 35295 2023-07-24 21:11:10,906 INFO [Listener at localhost/41541] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:10,911 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,911 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@bb562ac{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:10,911 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,911 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e0fd42b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:10,915 INFO [Listener at localhost/41541] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:10,916 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:10,916 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:10,916 INFO [Listener at localhost/41541] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:11:10,917 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,917 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7d780e1a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:10,919 INFO [Listener at localhost/41541] server.AbstractConnector(333): Started ServerConnector@4e9584e1{HTTP/1.1, (http/1.1)}{0.0.0.0:35295} 2023-07-24 21:11:10,919 INFO [Listener at localhost/41541] server.Server(415): Started @41566ms 2023-07-24 21:11:10,930 INFO [Listener at localhost/41541] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:10,930 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,931 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,931 INFO [Listener at localhost/41541] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:11:10,931 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-24 21:11:10,931 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:10,931 INFO [Listener at localhost/41541] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:10,931 INFO [Listener at localhost/41541] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40989 2023-07-24 21:11:10,932 INFO [Listener at localhost/41541] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:11:10,934 DEBUG [Listener at localhost/41541] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:11:10,934 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,936 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,937 INFO [Listener at localhost/41541] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40989 connecting to ZooKeeper ensemble=127.0.0.1:53183 2023-07-24 21:11:10,941 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:409890x0, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:10,942 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:409890x0, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:10,942 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40989-0x101992c67a90003 connected 2023-07-24 21:11:10,943 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:10,943 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:10,943 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40989 2023-07-24 21:11:10,944 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40989 2023-07-24 21:11:10,944 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40989 2023-07-24 21:11:10,944 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40989 2023-07-24 21:11:10,944 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40989 2023-07-24 21:11:10,946 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:10,946 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:10,946 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:10,946 INFO [Listener at localhost/41541] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:11:10,946 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:10,946 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:10,947 INFO [Listener at localhost/41541] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 21:11:10,947 INFO [Listener at localhost/41541] http.HttpServer(1146): Jetty bound to port 40667 2023-07-24 21:11:10,947 INFO [Listener at localhost/41541] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:10,948 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,948 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@246b4380{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:10,949 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,949 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7c727e91{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:10,953 INFO [Listener at localhost/41541] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:10,954 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:10,954 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:10,954 INFO [Listener at localhost/41541] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 21:11:10,955 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:10,955 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@76d07aad{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:10,956 INFO [Listener at localhost/41541] server.AbstractConnector(333): Started ServerConnector@727fe86{HTTP/1.1, (http/1.1)}{0.0.0.0:40667} 2023-07-24 21:11:10,957 INFO [Listener at localhost/41541] server.Server(415): Started @41603ms 2023-07-24 21:11:10,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:10,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3fa19534{HTTP/1.1, (http/1.1)}{0.0.0.0:40333} 2023-07-24 21:11:10,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @41607ms 2023-07-24 21:11:10,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:10,963 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 21:11:10,963 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:10,964 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:10,964 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:10,964 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:10,964 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:10,964 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:10,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 21:11:10,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40875,1690233070775 from backup master directory 2023-07-24 
21:11:10,967 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 21:11:10,968 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:10,968 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:11:10,968 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 21:11:10,968 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:10,981 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/hbase.id with ID: 8e6fe8ff-ad81-422a-9d1f-2addc6588c39 2023-07-24 21:11:10,990 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:10,993 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:11,002 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x71e7e4d9 to 127.0.0.1:53183 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:11,007 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4141ba1b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:11,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:11,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 21:11:11,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:11,010 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store-tmp 2023-07-24 21:11:11,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:11,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 21:11:11,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:11,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:11,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 21:11:11,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:11,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 21:11:11,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:11:11,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/WALs/jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:11,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40875%2C1690233070775, suffix=, logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/WALs/jenkins-hbase4.apache.org,40875,1690233070775, archiveDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/oldWALs, maxLogs=10 2023-07-24 21:11:11,047 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK] 2023-07-24 21:11:11,048 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK] 2023-07-24 21:11:11,048 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK] 2023-07-24 21:11:11,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/WALs/jenkins-hbase4.apache.org,40875,1690233070775/jenkins-hbase4.apache.org%2C40875%2C1690233070775.1690233071031 2023-07-24 21:11:11,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK], DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK], DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK]] 2023-07-24 21:11:11,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:11,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:11,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:11,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:11,053 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:11,054 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 21:11:11,055 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 21:11:11,055 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:11,056 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:11,056 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:11,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 21:11:11,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:11,061 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10018966400, jitterRate=-0.06691104173660278}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:11,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:11:11,061 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 21:11:11,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 21:11:11,062 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 21:11:11,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 21:11:11,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 21:11:11,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 21:11:11,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 21:11:11,063 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 21:11:11,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 21:11:11,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 21:11:11,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 21:11:11,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 21:11:11,067 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:11,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 21:11:11,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 21:11:11,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 21:11:11,069 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:11,069 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:11,069 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 21:11:11,069 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:11,069 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:11,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40875,1690233070775, sessionid=0x101992c67a90000, setting cluster-up flag (Was=false) 2023-07-24 21:11:11,074 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:11,078 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 21:11:11,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:11,081 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:11,085 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 21:11:11,086 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:11,086 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.hbase-snapshot/.tmp 2023-07-24 21:11:11,087 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 21:11:11,087 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 21:11:11,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 21:11:11,088 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 21:11:11,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-24 21:11:11,089 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 21:11:11,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 21:11:11,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 21:11:11,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 21:11:11,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 21:11:11,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:11:11,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:11:11,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:11:11,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 21:11:11,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 21:11:11,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:11,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690233101101 2023-07-24 21:11:11,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 21:11:11,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 21:11:11,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 21:11:11,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 21:11:11,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 21:11:11,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 21:11:11,102 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 21:11:11,102 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 21:11:11,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 21:11:11,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 21:11:11,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 21:11:11,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 21:11:11,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 21:11:11,103 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233071103,5,FailOnTimeoutGroup] 2023-07-24 21:11:11,103 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233071103,5,FailOnTimeoutGroup] 2023-07-24 21:11:11,103 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 
'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:11,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 21:11:11,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,117 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:11,117 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:11,117 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40 2023-07-24 21:11:11,127 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:11,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column 
family info of region 1588230740 2023-07-24 21:11:11,129 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/info 2023-07-24 21:11:11,130 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 21:11:11,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:11,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 21:11:11,131 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:11:11,132 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 21:11:11,132 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:11,132 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 21:11:11,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/table 2023-07-24 21:11:11,134 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); 
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 21:11:11,134 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:11,135 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740 2023-07-24 21:11:11,135 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740 2023-07-24 21:11:11,137 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 21:11:11,138 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 21:11:11,140 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:11,140 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11670082080, jitterRate=0.08686108887195587}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 21:11:11,140 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 21:11:11,140 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 21:11:11,140 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 21:11:11,140 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 21:11:11,140 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 21:11:11,140 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 21:11:11,140 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 21:11:11,141 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 21:11:11,141 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 21:11:11,141 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 21:11:11,141 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 21:11:11,142 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 21:11:11,144 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 21:11:11,159 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(951): ClusterId : 8e6fe8ff-ad81-422a-9d1f-2addc6588c39 2023-07-24 21:11:11,159 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(951): ClusterId : 8e6fe8ff-ad81-422a-9d1f-2addc6588c39 2023-07-24 21:11:11,159 DEBUG [RS:1;jenkins-hbase4:46505] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:11:11,159 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(951): ClusterId : 8e6fe8ff-ad81-422a-9d1f-2addc6588c39 2023-07-24 21:11:11,159 DEBUG [RS:0;jenkins-hbase4:35235] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:11:11,159 DEBUG [RS:2;jenkins-hbase4:40989] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:11:11,162 DEBUG [RS:0;jenkins-hbase4:35235] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:11:11,162 DEBUG [RS:0;jenkins-hbase4:35235] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:11:11,162 DEBUG [RS:1;jenkins-hbase4:46505] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:11:11,162 DEBUG [RS:1;jenkins-hbase4:46505] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:11:11,162 DEBUG [RS:2;jenkins-hbase4:40989] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:11:11,162 DEBUG [RS:2;jenkins-hbase4:40989] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:11:11,165 DEBUG [RS:0;jenkins-hbase4:35235] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:11:11,166 DEBUG [RS:0;jenkins-hbase4:35235] zookeeper.ReadOnlyZKClient(139): Connect 0x67397cdc to 127.0.0.1:53183 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:11,166 DEBUG [RS:2;jenkins-hbase4:40989] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:11:11,166 DEBUG [RS:1;jenkins-hbase4:46505] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:11:11,169 DEBUG [RS:1;jenkins-hbase4:46505] zookeeper.ReadOnlyZKClient(139): Connect 0x75c8943a to 127.0.0.1:53183 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:11,169 DEBUG [RS:2;jenkins-hbase4:40989] zookeeper.ReadOnlyZKClient(139): Connect 0x05131c58 to 127.0.0.1:53183 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:11,177 DEBUG [RS:0;jenkins-hbase4:35235] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69fe9e4b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:11,177 DEBUG [RS:0;jenkins-hbase4:35235] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6dc117ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:11,180 DEBUG [RS:1;jenkins-hbase4:46505] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c6a5e6b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:11,180 DEBUG [RS:1;jenkins-hbase4:46505] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4477c671, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:11,180 DEBUG [RS:2;jenkins-hbase4:40989] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4011829e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:11,180 DEBUG [RS:2;jenkins-hbase4:40989] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1417ff83, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:11,186 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35235 2023-07-24 21:11:11,186 INFO [RS:0;jenkins-hbase4:35235] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:11:11,186 INFO [RS:0;jenkins-hbase4:35235] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:11:11,186 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1022): About to register with Master. 
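Note on the ipc.AbstractRpcClient lines above: each region server builds two RPC clients that use org.apache.hadoop.hbase.codec.KeyValueCodec with no compressor. A minimal, hypothetical Java sketch of selecting that codec through the standard client configuration key (the key and the codec class are real HBase identifiers; the class name and everything else here are illustrative only, not the test's own code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.codec.KeyValueCodec;

public class RpcCodecSketch {
    public static void main(String[] args) {
        // Start from the default HBase client configuration.
        Configuration conf = HBaseConfiguration.create();
        // Select the cell codec reported in the log; compression stays off (compressor=null).
        conf.set("hbase.client.rpc.codec", KeyValueCodec.class.getName());
        System.out.println("rpc codec = " + conf.get("hbase.client.rpc.codec"));
    }
}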
2023-07-24 21:11:11,186 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40875,1690233070775 with isa=jenkins-hbase4.apache.org/172.31.14.131:35235, startcode=1690233070830 2023-07-24 21:11:11,186 DEBUG [RS:0;jenkins-hbase4:35235] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:11:11,188 DEBUG [RS:1;jenkins-hbase4:46505] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46505 2023-07-24 21:11:11,188 INFO [RS:1;jenkins-hbase4:46505] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:11:11,188 INFO [RS:1;jenkins-hbase4:46505] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:11:11,188 DEBUG [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 21:11:11,188 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:40989 2023-07-24 21:11:11,188 INFO [RS:2;jenkins-hbase4:40989] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:11:11,188 INFO [RS:2;jenkins-hbase4:40989] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:11:11,188 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 21:11:11,188 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37135, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:11:11,188 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40875,1690233070775 with isa=jenkins-hbase4.apache.org/172.31.14.131:46505, startcode=1690233070882 2023-07-24 21:11:11,188 DEBUG [RS:1;jenkins-hbase4:46505] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:11:11,189 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40875,1690233070775 with isa=jenkins-hbase4.apache.org/172.31.14.131:40989, startcode=1690233070930 2023-07-24 21:11:11,190 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40875] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,190 DEBUG [RS:2;jenkins-hbase4:40989] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:11:11,190 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 21:11:11,191 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 21:11:11,191 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40 2023-07-24 21:11:11,191 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46175 2023-07-24 21:11:11,191 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44127 2023-07-24 21:11:11,193 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53669, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:11:11,193 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57433, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:11:11,193 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40875] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,193 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 21:11:11,193 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 21:11:11,193 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40875] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,193 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 21:11:11,194 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40 2023-07-24 21:11:11,194 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 21:11:11,194 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46175 2023-07-24 21:11:11,194 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44127 2023-07-24 21:11:11,194 DEBUG [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40 2023-07-24 21:11:11,194 DEBUG [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46175 2023-07-24 21:11:11,194 DEBUG [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44127 2023-07-24 21:11:11,196 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:11,204 DEBUG [RS:0;jenkins-hbase4:35235] zookeeper.ZKUtil(162): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,204 WARN [RS:0;jenkins-hbase4:35235] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 21:11:11,204 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46505,1690233070882] 2023-07-24 21:11:11,204 INFO [RS:0;jenkins-hbase4:35235] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:11,204 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40989,1690233070930] 2023-07-24 21:11:11,204 DEBUG [RS:2;jenkins-hbase4:40989] zookeeper.ZKUtil(162): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,204 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35235,1690233070830] 2023-07-24 21:11:11,204 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,204 DEBUG [RS:1;jenkins-hbase4:46505] zookeeper.ZKUtil(162): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,204 WARN [RS:2;jenkins-hbase4:40989] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 21:11:11,205 WARN [RS:1;jenkins-hbase4:46505] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
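Note on the ZKUtil and RegionServerTracker lines above: each region server registers itself as an ephemeral znode under /hbase/rs, and the master's watcher then sees NodeChildrenChanged events for that path. A generic ZooKeeper sketch of this ephemeral-registration pattern, assuming a hypothetical quorum address and server name (this is not HBase's ZKUtil code, and it assumes the /hbase/rs parent already exists):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralRegistrationSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical quorum and server name, for illustration only.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90000, event -> { });
        String znode = "/hbase/rs/example-host,16020," + System.currentTimeMillis();
        // Ephemeral: the znode vanishes when this session expires, which is how
        // the master notices a region server that has gone away.
        zk.create(znode, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        // Watching the parent delivers NodeChildrenChanged, as seen in the log above.
        zk.getChildren("/hbase/rs", true);
    }
}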
2023-07-24 21:11:11,205 INFO [RS:2;jenkins-hbase4:40989] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:11,205 INFO [RS:1;jenkins-hbase4:46505] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:11,205 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,205 DEBUG [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,217 DEBUG [RS:0;jenkins-hbase4:35235] zookeeper.ZKUtil(162): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,217 DEBUG [RS:2;jenkins-hbase4:40989] zookeeper.ZKUtil(162): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,218 DEBUG [RS:0;jenkins-hbase4:35235] zookeeper.ZKUtil(162): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,218 DEBUG [RS:2;jenkins-hbase4:40989] zookeeper.ZKUtil(162): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,218 DEBUG [RS:0;jenkins-hbase4:35235] zookeeper.ZKUtil(162): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,218 DEBUG [RS:2;jenkins-hbase4:40989] zookeeper.ZKUtil(162): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,219 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:11:11,219 INFO [RS:0;jenkins-hbase4:35235] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:11:11,219 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:11:11,219 DEBUG [RS:1;jenkins-hbase4:46505] zookeeper.ZKUtil(162): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,220 DEBUG [RS:1;jenkins-hbase4:46505] zookeeper.ZKUtil(162): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,220 DEBUG [RS:1;jenkins-hbase4:46505] zookeeper.ZKUtil(162): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,221 INFO [RS:2;jenkins-hbase4:40989] regionserver.MetricsRegionServerWrapperImpl(154): Computing 
regionserver metrics every 5000 milliseconds 2023-07-24 21:11:11,221 INFO [RS:0;jenkins-hbase4:35235] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:11:11,221 DEBUG [RS:1;jenkins-hbase4:46505] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:11:11,222 INFO [RS:1;jenkins-hbase4:46505] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:11:11,222 INFO [RS:0;jenkins-hbase4:35235] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:11:11,222 INFO [RS:0;jenkins-hbase4:35235] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,222 INFO [RS:2;jenkins-hbase4:40989] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:11:11,223 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:11:11,223 INFO [RS:2;jenkins-hbase4:40989] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:11:11,223 INFO [RS:2;jenkins-hbase4:40989] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,223 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:11:11,223 INFO [RS:0;jenkins-hbase4:35235] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,224 DEBUG [RS:0;jenkins-hbase4:35235] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,229 INFO [RS:1;jenkins-hbase4:46505] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:11:11,230 INFO [RS:2;jenkins-hbase4:40989] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,231 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,231 INFO [RS:1;jenkins-hbase4:46505] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:11:11,231 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,231 INFO [RS:1;jenkins-hbase4:46505] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,231 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,231 INFO [RS:0;jenkins-hbase4:35235] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-24 21:11:11,231 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,231 INFO [RS:0;jenkins-hbase4:35235] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,231 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:11:11,232 INFO [RS:0;jenkins-hbase4:35235] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,231 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,232 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:11,232 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,232 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,233 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,233 DEBUG [RS:2;jenkins-hbase4:40989] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,236 INFO [RS:1;jenkins-hbase4:46505] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,237 INFO [RS:2;jenkins-hbase4:40989] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,237 INFO [RS:2;jenkins-hbase4:40989] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,237 INFO [RS:2;jenkins-hbase4:40989] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,237 DEBUG [RS:1;jenkins-hbase4:46505] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:11,242 INFO [RS:1;jenkins-hbase4:46505] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,242 INFO [RS:1;jenkins-hbase4:46505] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,242 INFO [RS:1;jenkins-hbase4:46505] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,247 INFO [RS:0;jenkins-hbase4:35235] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:11:11,248 INFO [RS:0;jenkins-hbase4:35235] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35235,1690233070830-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,250 INFO [RS:2;jenkins-hbase4:40989] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:11:11,250 INFO [RS:2;jenkins-hbase4:40989] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40989,1690233070930-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,252 INFO [RS:1;jenkins-hbase4:46505] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:11:11,252 INFO [RS:1;jenkins-hbase4:46505] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46505,1690233070882-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
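Note on the executor.ExecutorService lines above: every handler pool is started with corePoolSize equal to maxPoolSize (1/1, or 2/2 for RS_LOG_REPLAY_OPS), i.e. a fixed-size named pool. A plain java.util.concurrent sketch of that shape; the pool and thread names are illustrative and this is not HBase's own ExecutorService wrapper:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedNamedPoolSketch {
    public static void main(String[] args) {
        // core == max, mirroring e.g. RS_OPEN_REGION (corePoolSize=1, maxPoolSize=1).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
                r -> new Thread(r, "RS_OPEN_REGION-example"));
        pool.submit(() -> System.out.println("open-region task"));
        pool.shutdown();
    }
}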
2023-07-24 21:11:11,258 INFO [RS:0;jenkins-hbase4:35235] regionserver.Replication(203): jenkins-hbase4.apache.org,35235,1690233070830 started 2023-07-24 21:11:11,258 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35235,1690233070830, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35235, sessionid=0x101992c67a90001 2023-07-24 21:11:11,258 DEBUG [RS:0;jenkins-hbase4:35235] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:11:11,258 DEBUG [RS:0;jenkins-hbase4:35235] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,259 DEBUG [RS:0;jenkins-hbase4:35235] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35235,1690233070830' 2023-07-24 21:11:11,259 DEBUG [RS:0;jenkins-hbase4:35235] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:11:11,259 DEBUG [RS:0;jenkins-hbase4:35235] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:11:11,259 DEBUG [RS:0;jenkins-hbase4:35235] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:11:11,259 DEBUG [RS:0;jenkins-hbase4:35235] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:11:11,259 DEBUG [RS:0;jenkins-hbase4:35235] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,259 DEBUG [RS:0;jenkins-hbase4:35235] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35235,1690233070830' 2023-07-24 21:11:11,259 DEBUG [RS:0;jenkins-hbase4:35235] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:11:11,260 DEBUG [RS:0;jenkins-hbase4:35235] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:11:11,260 DEBUG [RS:0;jenkins-hbase4:35235] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:11:11,260 INFO [RS:0;jenkins-hbase4:35235] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 21:11:11,260 INFO [RS:0;jenkins-hbase4:35235] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 21:11:11,260 INFO [RS:2;jenkins-hbase4:40989] regionserver.Replication(203): jenkins-hbase4.apache.org,40989,1690233070930 started 2023-07-24 21:11:11,260 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40989,1690233070930, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40989, sessionid=0x101992c67a90003 2023-07-24 21:11:11,260 DEBUG [RS:2;jenkins-hbase4:40989] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:11:11,260 DEBUG [RS:2;jenkins-hbase4:40989] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,260 DEBUG [RS:2;jenkins-hbase4:40989] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40989,1690233070930' 2023-07-24 21:11:11,260 DEBUG [RS:2;jenkins-hbase4:40989] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:11:11,261 DEBUG [RS:2;jenkins-hbase4:40989] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:11:11,261 DEBUG [RS:2;jenkins-hbase4:40989] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:11:11,261 DEBUG [RS:2;jenkins-hbase4:40989] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:11:11,261 DEBUG [RS:2;jenkins-hbase4:40989] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,261 DEBUG [RS:2;jenkins-hbase4:40989] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40989,1690233070930' 2023-07-24 21:11:11,261 DEBUG [RS:2;jenkins-hbase4:40989] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:11:11,261 DEBUG [RS:2;jenkins-hbase4:40989] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:11:11,261 DEBUG [RS:2;jenkins-hbase4:40989] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:11:11,261 INFO [RS:2;jenkins-hbase4:40989] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 21:11:11,262 INFO [RS:2;jenkins-hbase4:40989] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 21:11:11,264 INFO [RS:1;jenkins-hbase4:46505] regionserver.Replication(203): jenkins-hbase4.apache.org,46505,1690233070882 started 2023-07-24 21:11:11,264 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46505,1690233070882, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46505, sessionid=0x101992c67a90002 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46505,1690233070882' 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46505,1690233070882' 2023-07-24 21:11:11,264 DEBUG [RS:1;jenkins-hbase4:46505] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:11:11,265 DEBUG [RS:1;jenkins-hbase4:46505] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:11:11,265 DEBUG [RS:1;jenkins-hbase4:46505] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:11:11,265 INFO [RS:1;jenkins-hbase4:46505] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 21:11:11,265 INFO [RS:1;jenkins-hbase4:46505] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 21:11:11,294 DEBUG [jenkins-hbase4:40875] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 21:11:11,294 DEBUG [jenkins-hbase4:40875] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:11,294 DEBUG [jenkins-hbase4:40875] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:11,294 DEBUG [jenkins-hbase4:40875] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:11,294 DEBUG [jenkins-hbase4:40875] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:11,294 DEBUG [jenkins-hbase4:40875] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:11,295 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40989,1690233070930, state=OPENING 2023-07-24 21:11:11,298 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 21:11:11,299 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:11,299 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:11:11,299 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40989,1690233070930}] 2023-07-24 21:11:11,362 INFO [RS:0;jenkins-hbase4:35235] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35235%2C1690233070830, suffix=, logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,35235,1690233070830, archiveDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs, maxLogs=32 2023-07-24 21:11:11,363 INFO [RS:2;jenkins-hbase4:40989] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40989%2C1690233070930, suffix=, logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,40989,1690233070930, archiveDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs, maxLogs=32 2023-07-24 21:11:11,366 INFO [RS:1;jenkins-hbase4:46505] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46505%2C1690233070882, suffix=, logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,46505,1690233070882, archiveDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs, maxLogs=32 2023-07-24 21:11:11,381 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK] 2023-07-24 21:11:11,381 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK] 2023-07-24 21:11:11,381 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK] 2023-07-24 21:11:11,381 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK] 2023-07-24 21:11:11,381 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK] 2023-07-24 21:11:11,381 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK] 2023-07-24 21:11:11,385 INFO [RS:0;jenkins-hbase4:35235] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,35235,1690233070830/jenkins-hbase4.apache.org%2C35235%2C1690233070830.1690233071362 2023-07-24 21:11:11,386 INFO [RS:2;jenkins-hbase4:40989] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,40989,1690233070930/jenkins-hbase4.apache.org%2C40989%2C1690233070930.1690233071363 2023-07-24 21:11:11,386 DEBUG [RS:0;jenkins-hbase4:35235] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK], DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK], DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK]] 2023-07-24 21:11:11,386 DEBUG [RS:2;jenkins-hbase4:40989] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK], DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK], DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK]] 2023-07-24 21:11:11,391 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK] 2023-07-24 21:11:11,391 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK] 2023-07-24 21:11:11,391 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK] 2023-07-24 21:11:11,394 INFO [RS:1;jenkins-hbase4:46505] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,46505,1690233070882/jenkins-hbase4.apache.org%2C46505%2C1690233070882.1690233071367 2023-07-24 21:11:11,394 DEBUG [RS:1;jenkins-hbase4:46505] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK], DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK], DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK]] 2023-07-24 21:11:11,395 WARN [ReadOnlyZKClient-127.0.0.1:53183@0x71e7e4d9] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 21:11:11,395 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:11,396 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48800, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:11,396 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40989] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:48800 deadline: 1690233131396, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,453 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:11,455 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:11:11,457 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48812, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:11:11,461 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 21:11:11,461 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:11,462 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40989%2C1690233070930.meta, suffix=.meta, logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,40989,1690233070930, archiveDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs, maxLogs=32 2023-07-24 21:11:11,478 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK] 2023-07-24 21:11:11,479 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK] 2023-07-24 21:11:11,479 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK] 2023-07-24 21:11:11,482 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,40989,1690233070930/jenkins-hbase4.apache.org%2C40989%2C1690233070930.meta.1690233071463.meta 2023-07-24 21:11:11,482 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK], DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK], DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK]] 2023-07-24 21:11:11,482 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:11,482 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 21:11:11,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 21:11:11,483 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
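Note on the wal.AbstractFSWAL configuration lines above (blocksize=256 MB, rollsize=128 MB, maxLogs=32): the two sizes are consistent with a roll size of half the WAL block size, assuming the default hbase.regionserver.logroll.multiplier of 0.5 and a WAL block size of twice the 128 MB filesystem block size. A back-of-the-envelope sketch under those assumptions:

public class WalRollSizeSketch {
    public static void main(String[] args) {
        long fsBlockSize = 128L * 1024 * 1024;                  // 134217728 (assumed HDFS block size)
        long walBlockSize = 2 * fsBlockSize;                    // 268435456 -> the logged "blocksize=256 MB"
        double rollMultiplier = 0.5;                            // assumed default logroll multiplier
        long rollSize = (long) (walBlockSize * rollMultiplier); // 134217728 -> the logged "rollsize=128 MB"
        System.out.println(walBlockSize + " / " + rollSize);
    }
}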
2023-07-24 21:11:11,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 21:11:11,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:11,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 21:11:11,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 21:11:11,484 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 21:11:11,486 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/info 2023-07-24 21:11:11,486 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/info 2023-07-24 21:11:11,486 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 21:11:11,487 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:11,487 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 21:11:11,488 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:11:11,488 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/rep_barrier 2023-07-24 21:11:11,488 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 21:11:11,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:11,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 21:11:11,490 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/table 2023-07-24 21:11:11,490 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/table 2023-07-24 21:11:11,490 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 21:11:11,490 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:11,491 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740 2023-07-24 21:11:11,493 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740 2023-07-24 21:11:11,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
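The CompactionConfiguration(173) lines above print the effective compaction settings for each column family of the meta region; a hedged mapping of those figures onto the usual configuration keys (values mirror the defaults this mini-cluster logged, and the key-to-field pairing is based on stock HBase behaviour rather than anything stated in this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            conf.setLong("hbase.hstore.compaction.min.size", 134217728L);        // minCompactSize: 128 MB
            conf.setInt("hbase.hstore.compaction.min", 3);                       // minFilesToCompact
            conf.setInt("hbase.hstore.compaction.max", 10);                      // maxFilesToCompact
            conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                // ratio
            conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);        // off-peak ratio
            conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
            conf.setLong("hbase.hregion.majorcompaction", 604800000L);           // major period: 7 days
            conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);         // major jitter
        }
    }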
2023-07-24 21:11:11,497 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 21:11:11,498 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10289296160, jitterRate=-0.04173462092876434}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 21:11:11,498 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 21:11:11,498 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690233071453 2023-07-24 21:11:11,502 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 21:11:11,503 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 21:11:11,504 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40989,1690233070930, state=OPEN 2023-07-24 21:11:11,505 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 21:11:11,505 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 21:11:11,509 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 21:11:11,509 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40989,1690233070930 in 206 msec 2023-07-24 21:11:11,510 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 21:11:11,510 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 368 msec 2023-07-24 21:11:11,515 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 423 msec 2023-07-24 21:11:11,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690233071515, completionTime=-1 2023-07-24 21:11:11,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 21:11:11,515 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
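The desiredMaxFileSize/jitterRate pair printed for the meta region's split policy is consistent with a 10 GB maximum region size with the logged jitter applied; a small worked check, where the 10737418240-byte maximum is an assumed default (hbase.hregion.max.filesize) rather than something this log states:

    public class SplitSizeJitterSketch {
        public static void main(String[] args) {
            long maxFileSize = 10_737_418_240L;            // assumed hbase.hregion.max.filesize default
            double jitterRate = -0.04173462092876434;      // value printed in the log
            long desired = (long) (maxFileSize * (1 + jitterRate));
            System.out.println(desired);                   // ~10289296160, matching the log line
            // The initialSize of 268435456 is 2 x the 128 MB memstore flush size
            // (hbase.hregion.memstore.flush.size), per IncreasingToUpperBoundRegionSplitPolicy.
        }
    }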
2023-07-24 21:11:11,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 21:11:11,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690233131519 2023-07-24 21:11:11,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690233191519 2023-07-24 21:11:11,519 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-24 21:11:11,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40875,1690233070775-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40875,1690233070775-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40875,1690233070775-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40875, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:11,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
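The chores enabled above run on fixed periods; a rough sketch of the keys that usually drive the periods shown (the pairing of key to chore is an assumption from stock configuration, not something this log states):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ChorePeriodSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            conf.setInt("hbase.balancer.period", 300000);          // BalancerChore period=300000
            conf.setInt("hbase.normalizer.period", 300000);        // RegionNormalizerChore period=300000
            conf.setInt("hbase.catalogjanitor.interval", 300000);  // CatalogJanitor period=300000
        }
    }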
2023-07-24 21:11:11,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:11,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 21:11:11,527 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 21:11:11,529 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:11,529 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:11:11,531 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,531 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2 empty. 2023-07-24 21:11:11,532 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,532 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 21:11:11,547 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:11,548 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 27dd1507a222123ca6685060e35b9ff2, NAME => 'hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp 2023-07-24 21:11:11,559 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:11,559 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 27dd1507a222123ca6685060e35b9ff2, disabling compactions & flushes 2023-07-24 21:11:11,559 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 
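The 'hbase:namespace' descriptor logged by HMaster(2148) can be expressed with the ordinary HBase 2.x builder API; a minimal sketch for illustration only (system tables are created by the master itself, and only the schema attributes that differ from the defaults in the log line are set here):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceTableSketch {
        public static void main(String[] args) throws IOException {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Mirrors the column family attributes in the log: BLOOMFILTER=ROW, IN_MEMORY=true,
                // VERSIONS=10, BLOCKSIZE=8192; everything else is left at its default.
                admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:namespace"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                        .setBloomFilterType(BloomType.ROW)
                        .setInMemory(true)
                        .setMaxVersions(10)
                        .setBlocksize(8192)
                        .build())
                    .build());
            }
        }
    }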
2023-07-24 21:11:11,559 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:11,559 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. after waiting 0 ms 2023-07-24 21:11:11,559 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:11,560 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:11,560 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 27dd1507a222123ca6685060e35b9ff2: 2023-07-24 21:11:11,563 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:11:11,563 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233071563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233071563"}]},"ts":"1690233071563"} 2023-07-24 21:11:11,566 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 21:11:11,567 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:11:11,567 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233071567"}]},"ts":"1690233071567"} 2023-07-24 21:11:11,568 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 21:11:11,570 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:11,570 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:11,570 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:11,570 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:11,570 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:11,571 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=27dd1507a222123ca6685060e35b9ff2, ASSIGN}] 2023-07-24 21:11:11,572 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=27dd1507a222123ca6685060e35b9ff2, ASSIGN 2023-07-24 21:11:11,573 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=27dd1507a222123ca6685060e35b9ff2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46505,1690233070882; forceNewPlan=false, retain=false 2023-07-24 21:11:11,699 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:11,701 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 21:11:11,702 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:11,703 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:11:11,704 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:11,705 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19 empty. 2023-07-24 21:11:11,705 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:11,705 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 21:11:11,723 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:11,723 INFO [jenkins-hbase4:40875] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
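The 'hbase:rsgroup' descriptor above additionally carries a table-level coprocessor and the DisabledRegionSplitPolicy; a hedged sketch of how those TABLE_ATTRIBUTES are expressed with the same builder API (the descriptor is only built and printed here, since this table too is managed by the master):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupTableSketch {
        public static void main(String[] args) throws IOException {
            // Mirrors the TABLE_ATTRIBUTES logged for 'hbase:rsgroup': the MultiRowMutationEndpoint
            // coprocessor plus a split policy that keeps the table in a single region.
            TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:rsgroup"))
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                .setRegionSplitPolicyClassName(
                    "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
                .build();
            System.out.println(td);
        }
    }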
2023-07-24 21:11:11,725 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=27dd1507a222123ca6685060e35b9ff2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,725 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233071724"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233071724"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233071724"}]},"ts":"1690233071724"} 2023-07-24 21:11:11,725 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 496e5d996ebcd2737a599fa1f8d8aa19, NAME => 'hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp 2023-07-24 21:11:11,726 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 27dd1507a222123ca6685060e35b9ff2, server=jenkins-hbase4.apache.org,46505,1690233070882}] 2023-07-24 21:11:11,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:11,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 496e5d996ebcd2737a599fa1f8d8aa19, disabling compactions & flushes 2023-07-24 21:11:11,740 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:11,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:11,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. after waiting 0 ms 2023-07-24 21:11:11,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:11,740 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 
2023-07-24 21:11:11,740 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 496e5d996ebcd2737a599fa1f8d8aa19: 2023-07-24 21:11:11,742 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:11:11,743 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233071743"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233071743"}]},"ts":"1690233071743"} 2023-07-24 21:11:11,744 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 21:11:11,745 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:11:11,745 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233071745"}]},"ts":"1690233071745"} 2023-07-24 21:11:11,746 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 21:11:11,750 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:11,750 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:11,750 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:11,750 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:11,750 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:11,750 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=496e5d996ebcd2737a599fa1f8d8aa19, ASSIGN}] 2023-07-24 21:11:11,751 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=496e5d996ebcd2737a599fa1f8d8aa19, ASSIGN 2023-07-24 21:11:11,752 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=496e5d996ebcd2737a599fa1f8d8aa19, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35235,1690233070830; forceNewPlan=false, retain=false 2023-07-24 21:11:11,885 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,885 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:11:11,895 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55114, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:11:11,898 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:11,898 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 27dd1507a222123ca6685060e35b9ff2, NAME => 'hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:11,898 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,899 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:11,899 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,899 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,900 INFO [StoreOpener-27dd1507a222123ca6685060e35b9ff2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,901 DEBUG [StoreOpener-27dd1507a222123ca6685060e35b9ff2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2/info 2023-07-24 21:11:11,901 DEBUG [StoreOpener-27dd1507a222123ca6685060e35b9ff2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2/info 2023-07-24 21:11:11,902 INFO [jenkins-hbase4:40875] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 21:11:11,903 INFO [StoreOpener-27dd1507a222123ca6685060e35b9ff2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 27dd1507a222123ca6685060e35b9ff2 columnFamilyName info 2023-07-24 21:11:11,903 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=496e5d996ebcd2737a599fa1f8d8aa19, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:11,904 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233071903"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233071903"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233071903"}]},"ts":"1690233071903"} 2023-07-24 21:11:11,904 INFO [StoreOpener-27dd1507a222123ca6685060e35b9ff2-1] regionserver.HStore(310): Store=27dd1507a222123ca6685060e35b9ff2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:11,906 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 496e5d996ebcd2737a599fa1f8d8aa19, server=jenkins-hbase4.apache.org,35235,1690233070830}] 2023-07-24 21:11:11,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:11,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:11,927 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 27dd1507a222123ca6685060e35b9ff2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9978275680, jitterRate=-0.07070066034793854}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:11,927 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): 
Region open journal for 27dd1507a222123ca6685060e35b9ff2: 2023-07-24 21:11:11,927 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2., pid=7, masterSystemTime=1690233071884 2023-07-24 21:11:11,936 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:11,937 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:11,937 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=27dd1507a222123ca6685060e35b9ff2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:11,937 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690233071937"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233071937"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233071937"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233071937"}]},"ts":"1690233071937"} 2023-07-24 21:11:11,944 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 21:11:11,944 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 27dd1507a222123ca6685060e35b9ff2, server=jenkins-hbase4.apache.org,46505,1690233070882 in 213 msec 2023-07-24 21:11:11,946 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 21:11:11,946 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=27dd1507a222123ca6685060e35b9ff2, ASSIGN in 373 msec 2023-07-24 21:11:11,946 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:11:11,946 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233071946"}]},"ts":"1690233071946"} 2023-07-24 21:11:11,948 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 21:11:11,950 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:11:11,952 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 424 msec 2023-07-24 21:11:12,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 21:11:12,030 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): 
master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:12,030 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:12,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:12,035 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55116, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:12,038 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 21:11:12,046 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:12,050 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-24 21:11:12,060 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:12,060 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 21:11:12,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 21:11:12,063 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55330, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 21:11:12,065 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-24 21:11:12,066 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 21:11:12,068 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:12,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 496e5d996ebcd2737a599fa1f8d8aa19, NAME => 'hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:12,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 21:11:12,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 
service=MultiRowMutationService 2023-07-24 21:11:12,069 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 21:11:12,069 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:12,069 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:12,069 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:12,069 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:12,071 INFO [StoreOpener-496e5d996ebcd2737a599fa1f8d8aa19-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:12,072 DEBUG [StoreOpener-496e5d996ebcd2737a599fa1f8d8aa19-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19/m 2023-07-24 21:11:12,072 DEBUG [StoreOpener-496e5d996ebcd2737a599fa1f8d8aa19-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19/m 2023-07-24 21:11:12,072 INFO [StoreOpener-496e5d996ebcd2737a599fa1f8d8aa19-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 496e5d996ebcd2737a599fa1f8d8aa19 columnFamilyName m 2023-07-24 21:11:12,073 INFO [StoreOpener-496e5d996ebcd2737a599fa1f8d8aa19-1] regionserver.HStore(310): Store=496e5d996ebcd2737a599fa1f8d8aa19/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:12,074 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:12,074 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 
0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:12,077 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:12,079 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:12,079 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 496e5d996ebcd2737a599fa1f8d8aa19; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@27627bfe, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:12,079 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 496e5d996ebcd2737a599fa1f8d8aa19: 2023-07-24 21:11:12,080 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19., pid=9, masterSystemTime=1690233072060 2023-07-24 21:11:12,084 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:12,085 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:12,085 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=496e5d996ebcd2737a599fa1f8d8aa19, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:12,085 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690233072085"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233072085"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233072085"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233072085"}]},"ts":"1690233072085"} 2023-07-24 21:11:12,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-24 21:11:12,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 496e5d996ebcd2737a599fa1f8d8aa19, server=jenkins-hbase4.apache.org,35235,1690233070830 in 181 msec 2023-07-24 21:11:12,090 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-24 21:11:12,090 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=496e5d996ebcd2737a599fa1f8d8aa19, ASSIGN in 338 msec 2023-07-24 21:11:12,120 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:12,124 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 62 msec 2023-07-24 21:11:12,124 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:11:12,125 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233072125"}]},"ts":"1690233072125"} 2023-07-24 21:11:12,126 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 21:11:12,129 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:11:12,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 430 msec 2023-07-24 21:11:12,135 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 21:11:12,137 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 21:11:12,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.170sec 2023-07-24 21:11:12,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 21:11:12,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 21:11:12,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 21:11:12,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40875,1690233070775-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 21:11:12,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40875,1690233070775-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
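The CreateNamespaceProcedure entries for 'default' and 'hbase' (pid=10 and pid=11 above) are the built-in namespaces the master registers at startup; user namespaces go through the same procedure via the client API. A minimal sketch, with a hypothetical namespace name:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
        public static void main(String[] args) throws IOException {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // "my_ns" is a placeholder; 'default' and 'hbase' already exist, as pid=10/11 show.
                admin.createNamespace(NamespaceDescriptor.create("my_ns").build());
            }
        }
    }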
2023-07-24 21:11:12,160 DEBUG [Listener at localhost/41541] zookeeper.ReadOnlyZKClient(139): Connect 0x38ebbdb8 to 127.0.0.1:53183 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:12,166 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 21:11:12,222 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:12,239 DEBUG [Listener at localhost/41541] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c428834, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:12,239 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55342, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:12,242 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 21:11:12,242 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-24 21:11:12,257 DEBUG [hconnection-0x39329603-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:12,259 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48814, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:12,261 INFO [Listener at localhost/41541] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:12,261 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:12,264 DEBUG [Listener at localhost/41541] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 21:11:12,265 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 21:11:12,266 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:12,266 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:12,267 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 21:11:12,268 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40875,1690233070775] 
rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 21:11:12,273 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51428, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 21:11:12,279 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 21:11:12,280 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:12,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 21:11:12,281 DEBUG [Listener at localhost/41541] zookeeper.ReadOnlyZKClient(139): Connect 0x15a8754a to 127.0.0.1:53183 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:12,293 DEBUG [Listener at localhost/41541] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38876b72, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:12,294 INFO [Listener at localhost/41541] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:53183 2023-07-24 21:11:12,300 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:12,302 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101992c67a9000a connected 2023-07-24 21:11:12,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:12,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:12,318 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 21:11:12,338 INFO [Listener at localhost/41541] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 21:11:12,338 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:12,338 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:12,338 INFO [Listener at localhost/41541] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 21:11:12,338 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 21:11:12,338 INFO [Listener at localhost/41541] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 21:11:12,339 INFO [Listener at localhost/41541] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 21:11:12,341 INFO [Listener at localhost/41541] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32963 2023-07-24 21:11:12,342 INFO [Listener at localhost/41541] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 21:11:12,346 DEBUG [Listener at localhost/41541] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 21:11:12,346 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:12,347 INFO [Listener at localhost/41541] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 21:11:12,349 INFO [Listener at localhost/41541] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32963 connecting to ZooKeeper ensemble=127.0.0.1:53183 2023-07-24 21:11:12,355 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:329630x0, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 21:11:12,360 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32963-0x101992c67a9000b connected 2023-07-24 21:11:12,360 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(162): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 21:11:12,362 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(162): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 21:11:12,363 DEBUG [Listener at localhost/41541] zookeeper.ZKUtil(164): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 21:11:12,363 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32963 2023-07-24 21:11:12,366 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32963 2023-07-24 21:11:12,369 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32963 2023-07-24 21:11:12,372 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32963 2023-07-24 21:11:12,372 DEBUG [Listener at localhost/41541] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32963 2023-07-24 21:11:12,374 INFO [Listener at localhost/41541] 
http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 21:11:12,375 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 21:11:12,375 INFO [Listener at localhost/41541] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 21:11:12,375 INFO [Listener at localhost/41541] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 21:11:12,375 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 21:11:12,376 INFO [Listener at localhost/41541] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 21:11:12,376 INFO [Listener at localhost/41541] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 21:11:12,376 INFO [Listener at localhost/41541] http.HttpServer(1146): Jetty bound to port 43943 2023-07-24 21:11:12,377 INFO [Listener at localhost/41541] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 21:11:12,380 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:12,380 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@303c9d88{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,AVAILABLE} 2023-07-24 21:11:12,380 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:12,381 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6eaad46f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-24 21:11:12,393 INFO [Listener at localhost/41541] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 21:11:12,393 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 21:11:12,393 INFO [Listener at localhost/41541] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 21:11:12,394 INFO [Listener at localhost/41541] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 21:11:12,394 INFO [Listener at localhost/41541] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 21:11:12,395 INFO [Listener at localhost/41541] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1791e06d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:12,396 INFO [Listener at localhost/41541] server.AbstractConnector(333): Started ServerConnector@69993185{HTTP/1.1, (http/1.1)}{0.0.0.0:43943} 2023-07-24 21:11:12,396 INFO [Listener at localhost/41541] server.Server(415): Started @43042ms 2023-07-24 21:11:12,400 INFO [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(951): ClusterId : 8e6fe8ff-ad81-422a-9d1f-2addc6588c39 2023-07-24 21:11:12,400 DEBUG [RS:3;jenkins-hbase4:32963] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 21:11:12,402 DEBUG [RS:3;jenkins-hbase4:32963] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 21:11:12,402 DEBUG [RS:3;jenkins-hbase4:32963] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 21:11:12,404 DEBUG [RS:3;jenkins-hbase4:32963] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 21:11:12,404 DEBUG [RS:3;jenkins-hbase4:32963] zookeeper.ReadOnlyZKClient(139): Connect 0x2f5d82a7 to 127.0.0.1:53183 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 21:11:12,409 DEBUG [RS:3;jenkins-hbase4:32963] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@219a793c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 21:11:12,409 DEBUG [RS:3;jenkins-hbase4:32963] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25a5e9d7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:12,417 DEBUG [RS:3;jenkins-hbase4:32963] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:32963 2023-07-24 21:11:12,417 INFO [RS:3;jenkins-hbase4:32963] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 21:11:12,417 INFO [RS:3;jenkins-hbase4:32963] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 21:11:12,417 DEBUG [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 21:11:12,418 INFO [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40875,1690233070775 with isa=jenkins-hbase4.apache.org/172.31.14.131:32963, startcode=1690233072337 2023-07-24 21:11:12,418 DEBUG [RS:3;jenkins-hbase4:32963] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 21:11:12,420 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42009, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 21:11:12,420 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40875] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,420 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 21:11:12,421 DEBUG [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40 2023-07-24 21:11:12,421 DEBUG [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46175 2023-07-24 21:11:12,421 DEBUG [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44127 2023-07-24 21:11:12,427 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:12,427 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:12,427 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:12,427 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:12,427 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:12,428 DEBUG [RS:3;jenkins-hbase4:32963] zookeeper.ZKUtil(162): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,428 WARN [RS:3;jenkins-hbase4:32963] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 21:11:12,428 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 21:11:12,428 INFO [RS:3;jenkins-hbase4:32963] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 21:11:12,428 DEBUG [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,428 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:12,428 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32963,1690233072337] 2023-07-24 21:11:12,428 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:12,428 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:12,429 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 21:11:12,429 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:12,429 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:12,430 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:12,430 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:12,431 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:12,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:12,432 DEBUG [RS:3;jenkins-hbase4:32963] zookeeper.ZKUtil(162): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:12,432 DEBUG [RS:3;jenkins-hbase4:32963] zookeeper.ZKUtil(162): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:12,432 DEBUG [RS:3;jenkins-hbase4:32963] zookeeper.ZKUtil(162): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,433 DEBUG [RS:3;jenkins-hbase4:32963] zookeeper.ZKUtil(162): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:12,433 DEBUG [RS:3;jenkins-hbase4:32963] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 21:11:12,433 INFO [RS:3;jenkins-hbase4:32963] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 21:11:12,435 INFO [RS:3;jenkins-hbase4:32963] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 21:11:12,435 INFO [RS:3;jenkins-hbase4:32963] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 21:11:12,435 INFO [RS:3;jenkins-hbase4:32963] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:12,435 INFO [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 21:11:12,436 INFO [RS:3;jenkins-hbase4:32963] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 21:11:12,436 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,436 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,436 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,437 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,437 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,437 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 21:11:12,437 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,437 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,437 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,437 DEBUG [RS:3;jenkins-hbase4:32963] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 21:11:12,438 INFO [RS:3;jenkins-hbase4:32963] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:12,438 INFO [RS:3;jenkins-hbase4:32963] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:12,438 INFO [RS:3;jenkins-hbase4:32963] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 21:11:12,449 INFO [RS:3;jenkins-hbase4:32963] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 21:11:12,449 INFO [RS:3;jenkins-hbase4:32963] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32963,1690233072337-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 21:11:12,459 INFO [RS:3;jenkins-hbase4:32963] regionserver.Replication(203): jenkins-hbase4.apache.org,32963,1690233072337 started 2023-07-24 21:11:12,459 INFO [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32963,1690233072337, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32963, sessionid=0x101992c67a9000b 2023-07-24 21:11:12,459 DEBUG [RS:3;jenkins-hbase4:32963] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 21:11:12,459 DEBUG [RS:3;jenkins-hbase4:32963] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,459 DEBUG [RS:3;jenkins-hbase4:32963] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32963,1690233072337' 2023-07-24 21:11:12,459 DEBUG [RS:3;jenkins-hbase4:32963] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 21:11:12,460 DEBUG [RS:3;jenkins-hbase4:32963] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 21:11:12,460 DEBUG [RS:3;jenkins-hbase4:32963] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 21:11:12,460 DEBUG [RS:3;jenkins-hbase4:32963] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 21:11:12,460 DEBUG [RS:3;jenkins-hbase4:32963] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:12,460 DEBUG [RS:3;jenkins-hbase4:32963] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32963,1690233072337' 2023-07-24 21:11:12,460 DEBUG [RS:3;jenkins-hbase4:32963] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 21:11:12,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:12,460 DEBUG [RS:3;jenkins-hbase4:32963] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 21:11:12,461 DEBUG [RS:3;jenkins-hbase4:32963] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 21:11:12,461 INFO [RS:3;jenkins-hbase4:32963] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 21:11:12,461 INFO [RS:3;jenkins-hbase4:32963] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 21:11:12,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:12,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:12,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:12,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:12,467 DEBUG [hconnection-0x11f7f40c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:12,468 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48820, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:12,471 DEBUG [hconnection-0x11f7f40c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 21:11:12,476 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55350, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 21:11:12,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:12,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:12,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:12,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:12,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:51428 deadline: 1690234272480, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:12,480 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:12,481 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:12,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:12,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:12,482 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:12,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:12,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:12,543 INFO [Listener at localhost/41541] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=565 (was 514) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:46175 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/41541-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-9-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/41541-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins@localhost:46175 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x2f5d82a7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp132733128-2271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:41467 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36583,1690233065756 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7236d8d0 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-88076854_17 at /127.0.0.1:37146 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data1/current/BP-796522066-172.31.14.131-1690233070025 java.lang.Thread.sleep(Native 
Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-72ddbce8-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 44105 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-12-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@165b3075 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:32963 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data6/current/BP-796522066-172.31.14.131-1690233070025 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175 java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35235Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233071103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53256@0x7fd43610-SendThread(127.0.0.1:53256) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: 7569411@qtp-803099987-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34157 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: hconnection-0x71b0c722-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:46175 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:41467 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1165525960-2578 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1550833951_17 at /127.0.0.1:52900 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x75c8943a-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp132733128-2268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 46175 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40875,1690233070775 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Listener at localhost/41541-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 46175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_465544215_17 at /127.0.0.1:37192 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x39329603-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40-prefix:jenkins-hbase4.apache.org,40989,1690233070930.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp350742907-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp828800360-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1095231840-2234 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 46175 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native 
Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data3/current/BP-796522066-172.31.14.131-1690233070025 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-471b1f71-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 44105 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server idle connection scanner for port 36535 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_465544215_17 at /127.0.0.1:52886 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 44105 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x05131c58 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData-prefix:jenkins-hbase4.apache.org,40875,1690233070775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:46175 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server idle connection scanner for port 44105 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp132733128-2265-acceptor-0@62d25bfb-ServerConnector@4e9584e1{HTTP/1.1, (http/1.1)}{0.0.0.0:35295} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x15a8754a-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp335322925-2204-acceptor-0@22d045c2-ServerConnector@74a4f3c2{HTTP/1.1, (http/1.1)}{0.0.0.0:44127} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x2f5d82a7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:46175 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x38ebbdb8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2cc506f6 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1550833951_17 at /127.0.0.1:41526 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data4/current/BP-796522066-172.31.14.131-1690233070025 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:41467 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp335322925-2210 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x71b0c722-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@1428e7c9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:40989Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp350742907-2295-acceptor-0@51f70800-ServerConnector@727fe86{HTTP/1.1, (http/1.1)}{0.0.0.0:40667} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x75c8943a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 3 on default port 44105 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1550833951_17 at /127.0.0.1:41550 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1165525960-2577 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@5452b96b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:40989 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 41541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x38ebbdb8-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1165525960-2580 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40-prefix:jenkins-hbase4.apache.org,46505,1690233070882 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:32963-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11f7f40c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40-prefix:jenkins-hbase4.apache.org,40989,1690233070930 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:46175 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4466a578[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: hconnection-0x11f7f40c-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:46175 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp828800360-2309-acceptor-0@7e2801b8-ServerConnector@3fa19534{HTTP/1.1, (http/1.1)}{0.0.0.0:40333} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:35235 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x67397cdc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@25d07b0a[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_465544215_17 at /127.0.0.1:41542 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp350742907-2297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp828800360-2305 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data5/current/BP-796522066-172.31.14.131-1690233070025 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1095231840-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 41541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp335322925-2206 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp132733128-2267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-88076854_17 at /127.0.0.1:52826 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x71e7e4d9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x15a8754a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@267fa5f3 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:32963Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541-EventThread 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-5217d006-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@238d22a4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1095231840-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp335322925-2205 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 7480945@qtp-1150663199-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp350742907-2296 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 1124773126@qtp-2073670254-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45205 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Server handler 0 on default port 46175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1165525960-2576 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at 
localhost/36605-SendThread(127.0.0.1:53256) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@42c43c67 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:41467 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp132733128-2266 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1095231840-2236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x38ebbdb8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 41541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x67397cdc-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:46175 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1165525960-2573 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1ac8173c java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:46175 from jenkins java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1550833951_17 at /127.0.0.1:37186 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1095231840-2235-acceptor-0@509c2034-ServerConnector@716c1dd1{HTTP/1.1, (http/1.1)}{0.0.0.0:36369} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x05131c58-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1165525960-2574-acceptor-0@1c7c00df-ServerConnector@69993185{HTTP/1.1, (http/1.1)}{0.0.0.0:43943} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1095231840-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36605-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp828800360-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 21877538@qtp-1292148484-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40809 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 1 on default port 46175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:0;jenkins-hbase4:35235-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46505-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp335322925-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46505Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x71b0c722-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_465544215_17 at /127.0.0.1:37194 
[Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/41541.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x71b0c722-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:41467 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp132733128-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x05131c58-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-74440369-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1900271501@qtp-1150663199-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45143 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:41467 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1550833951_17 at /127.0.0.1:37208 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:46175 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp350742907-2301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp335322925-2203 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x2f5d82a7-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp828800360-2307 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x15a8754a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x71b0c722-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:41467 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(187831341) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1374921_17 at /127.0.0.1:41446 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1374921_17 at /127.0.0.1:41510 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:41467 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x71b0c722-shared-pool-0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233071103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1095231840-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp132733128-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 41541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp828800360-2308 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@7b6d1808 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46505 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1374921_17 at /127.0.0.1:52864 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1550833951_17 at /127.0.0.1:52880 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:53183): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp132733128-2264 sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 729849042@qtp-803099987-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x67397cdc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53256@0x7fd43610-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-405644c1-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@e5b9050 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp828800360-2306 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data2/current/BP-796522066-172.31.14.131-1690233070025 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x71e7e4d9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp350742907-2300 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x71e7e4d9-SendThread(127.0.0.1:53183) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53183@0x75c8943a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40-prefix:jenkins-hbase4.apache.org,35235,1690233070830 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at 
localhost/41541.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp335322925-2207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1095231840-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1374921_17 at /127.0.0.1:37172 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x71b0c722-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-88076854_17 at /127.0.0.1:41480 [Receiving block BP-796522066-172.31.14.131-1690233070025:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1705152761) connection to localhost/127.0.0.1:41467 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 4 on default port 44105 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-796522066-172.31.14.131-1690233070025:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1275726783@qtp-1292148484-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: 1092857492@qtp-2073670254-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 4 on default port 46175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x71b0c722-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1165525960-2575 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 41541 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 36535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@59ee27ae[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2ce2720a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp335322925-2208 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53256@0x7fd43610 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1891415359.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1165525960-2579 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp828800360-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@4731d80e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp350742907-2298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32963 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:2;jenkins-hbase4:40989-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/41541-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40875 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp350742907-2294 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/685787390.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:40875 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41541-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:53183 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=843 (was 788) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 448) - SystemLoadAverage LEAK? -, ProcessCount=177 (was 177), AvailableMemoryMB=5313 (was 5618) 2023-07-24 21:11:12,547 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=565 is superior to 500 2023-07-24 21:11:12,563 INFO [RS:3;jenkins-hbase4:32963] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32963%2C1690233072337, suffix=, logDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,32963,1690233072337, archiveDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs, maxLogs=32 2023-07-24 21:11:12,568 INFO [Listener at localhost/41541] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=565, OpenFileDescriptor=843, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=177, AvailableMemoryMB=5311 2023-07-24 21:11:12,568 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=565 is superior to 500 2023-07-24 21:11:12,568 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-24 21:11:12,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:12,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:12,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): 
Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:12,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:11:12,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:12,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:12,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:12,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:12,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:12,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:12,588 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK] 2023-07-24 21:11:12,588 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK] 2023-07-24 21:11:12,588 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK] 2023-07-24 21:11:12,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:12,591 INFO [RS:3;jenkins-hbase4:32963] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,32963,1690233072337/jenkins-hbase4.apache.org%2C32963%2C1690233072337.1690233072563 2023-07-24 21:11:12,592 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:12,593 DEBUG [RS:3;jenkins-hbase4:32963] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34447,DS-b66a39dc-d88d-4f09-b12d-8b6e94b254e0,DISK], DatanodeInfoWithStorage[127.0.0.1:45113,DS-5141bebe-3b3c-4eb6-8110-d4665d5de470,DISK], DatanodeInfoWithStorage[127.0.0.1:39239,DS-1b519d08-d56e-48e3-906e-f7dc1dc98bb7,DISK]] 2023-07-24 21:11:12,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:12,595 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:12,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:12,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:12,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:12,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:12,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:12,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:12,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:12,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:51428 deadline: 1690234272602, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:12,603 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:11:12,604 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:12,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:12,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:12,605 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:12,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:12,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:12,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:12,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 21:11:12,610 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:12,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-24 21:11:12,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 21:11:12,612 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:12,613 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:12,613 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:12,615 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 21:11:12,616 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 
21:11:12,617 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684 empty. 2023-07-24 21:11:12,618 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:12,618 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 21:11:12,633 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-24 21:11:12,638 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9deaf3eb1b1c708a4ec2befba6333684, NAME => 't1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp 2023-07-24 21:11:12,656 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:12,656 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 9deaf3eb1b1c708a4ec2befba6333684, disabling compactions & flushes 2023-07-24 21:11:12,656 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:12,656 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:12,656 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. after waiting 0 ms 2023-07-24 21:11:12,657 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:12,657 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:12,657 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 9deaf3eb1b1c708a4ec2befba6333684: 2023-07-24 21:11:12,659 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 21:11:12,660 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233072660"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233072660"}]},"ts":"1690233072660"} 2023-07-24 21:11:12,661 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 21:11:12,662 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 21:11:12,662 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233072662"}]},"ts":"1690233072662"} 2023-07-24 21:11:12,665 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-24 21:11:12,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 21:11:12,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 21:11:12,669 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 21:11:12,669 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 21:11:12,669 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 21:11:12,669 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 21:11:12,669 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=9deaf3eb1b1c708a4ec2befba6333684, ASSIGN}] 2023-07-24 21:11:12,670 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=9deaf3eb1b1c708a4ec2befba6333684, ASSIGN 2023-07-24 21:11:12,671 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=9deaf3eb1b1c708a4ec2befba6333684, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40989,1690233070930; forceNewPlan=false, retain=false 2023-07-24 21:11:12,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 21:11:12,821 INFO [jenkins-hbase4:40875] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 21:11:12,823 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=9deaf3eb1b1c708a4ec2befba6333684, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:12,823 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233072823"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233072823"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233072823"}]},"ts":"1690233072823"} 2023-07-24 21:11:12,824 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 9deaf3eb1b1c708a4ec2befba6333684, server=jenkins-hbase4.apache.org,40989,1690233070930}] 2023-07-24 21:11:12,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 21:11:12,980 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:12,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9deaf3eb1b1c708a4ec2befba6333684, NAME => 't1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.', STARTKEY => '', ENDKEY => ''} 2023-07-24 21:11:12,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:12,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 21:11:12,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:12,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:12,985 INFO [StoreOpener-9deaf3eb1b1c708a4ec2befba6333684-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:12,987 DEBUG [StoreOpener-9deaf3eb1b1c708a4ec2befba6333684-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684/cf1 2023-07-24 21:11:12,987 DEBUG [StoreOpener-9deaf3eb1b1c708a4ec2befba6333684-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684/cf1 2023-07-24 21:11:12,988 INFO [StoreOpener-9deaf3eb1b1c708a4ec2befba6333684-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9deaf3eb1b1c708a4ec2befba6333684 columnFamilyName cf1 2023-07-24 21:11:12,988 INFO [StoreOpener-9deaf3eb1b1c708a4ec2befba6333684-1] regionserver.HStore(310): Store=9deaf3eb1b1c708a4ec2befba6333684/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 21:11:12,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:12,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:12,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:13,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 21:11:13,010 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9deaf3eb1b1c708a4ec2befba6333684; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11721540640, jitterRate=0.09165354073047638}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 21:11:13,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9deaf3eb1b1c708a4ec2befba6333684: 2023-07-24 21:11:13,011 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684., pid=14, masterSystemTime=1690233072976 2023-07-24 21:11:13,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:13,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 
2023-07-24 21:11:13,013 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=9deaf3eb1b1c708a4ec2befba6333684, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:13,014 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233073013"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690233073013"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690233073013"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690233073013"}]},"ts":"1690233073013"} 2023-07-24 21:11:13,017 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-24 21:11:13,017 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 9deaf3eb1b1c708a4ec2befba6333684, server=jenkins-hbase4.apache.org,40989,1690233070930 in 191 msec 2023-07-24 21:11:13,018 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 21:11:13,018 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=9deaf3eb1b1c708a4ec2befba6333684, ASSIGN in 348 msec 2023-07-24 21:11:13,019 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 21:11:13,019 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233073019"}]},"ts":"1690233073019"} 2023-07-24 21:11:13,020 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-24 21:11:13,022 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 21:11:13,023 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 414 msec 2023-07-24 21:11:13,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 21:11:13,216 INFO [Listener at localhost/41541] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-24 21:11:13,216 DEBUG [Listener at localhost/41541] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-24 21:11:13,216 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:13,220 INFO [Listener at localhost/41541] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-24 21:11:13,221 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:13,221 INFO [Listener at localhost/41541] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-24 21:11:13,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 21:11:13,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 21:11:13,226 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 21:11:13,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-24 21:11:13,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:51428 deadline: 1690233133222, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-24 21:11:13,228 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:13,231 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=7 msec 2023-07-24 21:11:13,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:13,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:13,330 INFO [Listener at localhost/41541] client.HBaseAdmin$15(890): Started disable of t1 2023-07-24 21:11:13,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-24 21:11:13,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-24 21:11:13,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 21:11:13,335 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233073335"}]},"ts":"1690233073335"} 2023-07-24 21:11:13,340 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-24 21:11:13,343 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-24 21:11:13,344 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=9deaf3eb1b1c708a4ec2befba6333684, UNASSIGN}] 2023-07-24 21:11:13,344 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=9deaf3eb1b1c708a4ec2befba6333684, UNASSIGN 2023-07-24 21:11:13,345 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=9deaf3eb1b1c708a4ec2befba6333684, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:13,345 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233073345"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690233073345"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690233073345"}]},"ts":"1690233073345"} 2023-07-24 21:11:13,347 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 9deaf3eb1b1c708a4ec2befba6333684, server=jenkins-hbase4.apache.org,40989,1690233070930}] 2023-07-24 21:11:13,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 21:11:13,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:13,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9deaf3eb1b1c708a4ec2befba6333684, disabling compactions & flushes 2023-07-24 21:11:13,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:13,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:13,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. after waiting 0 ms 2023-07-24 21:11:13,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 
2023-07-24 21:11:13,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 21:11:13,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684. 2023-07-24 21:11:13,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9deaf3eb1b1c708a4ec2befba6333684: 2023-07-24 21:11:13,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:13,505 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=9deaf3eb1b1c708a4ec2befba6333684, regionState=CLOSED 2023-07-24 21:11:13,505 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690233073505"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690233073505"}]},"ts":"1690233073505"} 2023-07-24 21:11:13,511 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 21:11:13,512 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 9deaf3eb1b1c708a4ec2befba6333684, server=jenkins-hbase4.apache.org,40989,1690233070930 in 160 msec 2023-07-24 21:11:13,513 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-24 21:11:13,513 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=9deaf3eb1b1c708a4ec2befba6333684, UNASSIGN in 168 msec 2023-07-24 21:11:13,514 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690233073513"}]},"ts":"1690233073513"} 2023-07-24 21:11:13,515 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-24 21:11:13,516 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-24 21:11:13,519 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 187 msec 2023-07-24 21:11:13,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 21:11:13,639 INFO [Listener at localhost/41541] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-24 21:11:13,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-24 21:11:13,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-24 21:11:13,642 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 21:11:13,642 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-24 21:11:13,643 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-24 21:11:13,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:13,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:13,646 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:13,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 21:11:13,648 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684/cf1, FileablePath, hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684/recovered.edits] 2023-07-24 21:11:13,653 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684/recovered.edits/4.seqid to hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/archive/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684/recovered.edits/4.seqid 2023-07-24 21:11:13,654 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/.tmp/data/default/t1/9deaf3eb1b1c708a4ec2befba6333684 2023-07-24 21:11:13,654 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 21:11:13,656 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-24 21:11:13,658 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-24 21:11:13,659 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-24 21:11:13,660 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-24 21:11:13,660 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-24 21:11:13,661 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690233073660"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:13,662 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 21:11:13,662 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9deaf3eb1b1c708a4ec2befba6333684, NAME => 't1,,1690233072607.9deaf3eb1b1c708a4ec2befba6333684.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 21:11:13,662 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-24 21:11:13,662 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690233073662"}]},"ts":"9223372036854775807"} 2023-07-24 21:11:13,663 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-24 21:11:13,666 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 21:11:13,668 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 26 msec 2023-07-24 21:11:13,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 21:11:13,748 INFO [Listener at localhost/41541] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-24 21:11:13,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:13,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:11:13,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:13,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:13,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:13,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:13,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:13,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:13,764 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:13,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:13,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:13,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:13,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:13,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:13,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:13,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:51428 deadline: 1690234273780, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:13,781 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:13,785 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:13,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,786 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:13,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:13,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:13,805 INFO [Listener at localhost/41541] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=574 (was 565) - Thread LEAK? -, OpenFileDescriptor=848 (was 843) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 467), ProcessCount=177 (was 177), AvailableMemoryMB=5259 (was 5311) 2023-07-24 21:11:13,805 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-24 21:11:13,822 INFO [Listener at localhost/41541] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=574, OpenFileDescriptor=848, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=177, AvailableMemoryMB=5258 2023-07-24 21:11:13,823 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-24 21:11:13,823 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-24 21:11:13,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:13,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:11:13,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:13,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:13,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:13,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:13,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:13,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:13,836 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:13,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:13,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,838 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:13,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:13,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:13,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:13,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:13,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51428 deadline: 1690234273845, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:13,846 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:11:13,847 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:13,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,848 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:13,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:13,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:13,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 21:11:13,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:11:13,851 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-24 21:11:13,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 21:11:13,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 21:11:13,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:13,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
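
The testNonExistentTableMove entries above show the client asking for the rsgroup of a table that was never created (GetRSGroupInfoOfTable) and then logging "Moving table GrouptestNonExistentTableMove to default". The following is a minimal sketch of that flow, not the test itself: the class and method names (RSGroupAdminClient, getRSGroupInfoOfTable, moveTables, ConstraintException) are taken from the stack traces and RPC names in this log, while the connection wiring and the expectation that the move of an unknown table is rejected are assumptions.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class NonExistentTableMoveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName missing = TableName.valueOf("GrouptestNonExistentTableMove");

      // GetRSGroupInfoOfTable: a table that does not exist belongs to no group.
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(missing);
      System.out.println("group of missing table: " + group);

      // MoveTables: the master-side checks are expected to reject a move of an
      // unknown table (assumption based on the test name; the rejection itself
      // is not shown in the log excerpt above).
      try {
        rsGroupAdmin.moveTables(Collections.singleton(missing), RSGroupInfo.DEFAULT_GROUP);
      } catch (ConstraintException rejected) {
        System.out.println("move rejected: " + rejected.getMessage());
      }
    }
  }
}
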
2023-07-24 21:11:13,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:13,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:13,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:13,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:13,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:13,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:13,868 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:13,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:13,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:13,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:13,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:13,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:13,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:13,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51428 deadline: 1690234273878, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:13,879 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:13,880 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:13,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,882 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:13,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:13,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:13,901 INFO [Listener at localhost/41541] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 574) - Thread LEAK? 
-, OpenFileDescriptor=842 (was 848), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 467), ProcessCount=177 (was 177), AvailableMemoryMB=5258 (was 5258) 2023-07-24 21:11:13,901 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-24 21:11:13,920 INFO [Listener at localhost/41541] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=177, AvailableMemoryMB=5258 2023-07-24 21:11:13,920 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-24 21:11:13,920 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-24 21:11:13,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:13,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:11:13,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:13,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:13,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:13,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:13,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:13,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:13,933 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:13,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:13,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,936 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:13,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:13,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:13,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:13,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:13,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51428 deadline: 1690234273945, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:13,945 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:11:13,947 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:13,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,948 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:13,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:13,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:13,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:13,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
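
Each cleanup cycle logged above runs the same sequence: MoveTables and MoveServers with empty sets, RemoveRSGroup and AddRSGroup for the special "master" group, and finally a MoveServers of the master's own address, which fails with the ConstraintException repeated throughout this log because the master is not an online region server. Below is a minimal sketch of that last step; the class and method names come from the stack traces in this log, the address is the one from this run, and the connection wiring is an assumption rather than the actual TestRSGroupsBase code.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MasterGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Re-create the special "master" group, mirroring the RemoveRSGroup/AddRSGroup
      // pair in the cleanup entries above.
      rsGroupAdmin.removeRSGroup("master");
      rsGroupAdmin.addRSGroup("master");

      // The master (port 40875 in this run) is not in the list of online region
      // servers, so moving its address into the group fails with the
      // "Server ... is either offline or it does not exist." ConstraintException.
      Address masterAddr = Address.fromParts("jenkins-hbase4.apache.org", 40875);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
      } catch (ConstraintException expectedOnSetup) {
        // TestRSGroupsBase logs this as "Got this on setup, FYI" and carries on.
      }
    }
  }
}
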
2023-07-24 21:11:13,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:13,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:13,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:13,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:13,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:13,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:13,964 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:13,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:13,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:13,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:13,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:13,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:13,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:13,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:13,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51428 deadline: 1690234273972, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:13,973 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:13,975 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:13,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:13,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:13,976 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:13,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:13,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:13,997 INFO [Listener at localhost/41541] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? 
-, OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 467), ProcessCount=177 (was 177), AvailableMemoryMB=5258 (was 5258) 2023-07-24 21:11:13,997 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-24 21:11:14,015 INFO [Listener at localhost/41541] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=177, AvailableMemoryMB=5257 2023-07-24 21:11:14,015 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-24 21:11:14,015 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-24 21:11:14,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:14,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 21:11:14,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:14,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:14,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:14,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:14,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:14,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:14,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:14,032 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:14,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:14,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:14,034 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:14,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:14,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:14,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:14,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:14,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51428 deadline: 1690234274042, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:14,042 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 21:11:14,044 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:14,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,045 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:14,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:14,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:14,046 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-24 21:11:14,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-24 21:11:14,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 21:11:14,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:14,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:14,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 21:11:14,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:14,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 21:11:14,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 21:11:14,062 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 21:11:14,068 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:14,070 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 21:11:14,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-24 21:11:14,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 21:11:14,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 21:11:14,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:14,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:51428 deadline: 1690234274163, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-24 21:11:14,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 21:11:14,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-24 21:11:14,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 21:11:14,185 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 
2023-07-24 21:11:14,186 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-24 21:11:14,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 21:11:14,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-24 21:11:14,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 21:11:14,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:14,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 21:11:14,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:14,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 21:11:14,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:14,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-24 21:11:14,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 21:11:14,299 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 21:11:14,301 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 21:11:14,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 21:11:14,303 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 21:11:14,304 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 21:11:14,304 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): 
master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 21:11:14,305 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 21:11:14,306 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 21:11:14,307 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-24 21:11:14,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 21:11:14,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 21:11:14,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 21:11:14,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:14,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:14,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 21:11:14,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:14,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:14,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:51428 deadline: 1690233134415, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-24 21:11:14,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:14,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:11:14,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:14,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:14,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:14,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-24 21:11:14,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:14,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:14,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 21:11:14,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:14,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 21:11:14,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 21:11:14,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 21:11:14,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 21:11:14,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 21:11:14,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 21:11:14,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:14,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 21:11:14,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 21:11:14,438 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 21:11:14,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 21:11:14,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 21:11:14,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 21:11:14,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 21:11:14,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 21:11:14,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40875] to rsgroup master 2023-07-24 21:11:14,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 21:11:14,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51428 deadline: 1690234274446, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 2023-07-24 21:11:14,447 WARN [Listener at localhost/41541] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40875 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 21:11:14,449 INFO [Listener at localhost/41541] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 21:11:14,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 21:11:14,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 21:11:14,450 INFO [Listener at localhost/41541] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32963, jenkins-hbase4.apache.org:35235, jenkins-hbase4.apache.org:40989, jenkins-hbase4.apache.org:46505], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 21:11:14,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 21:11:14,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40875] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 21:11:14,468 INFO [Listener at localhost/41541] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=575 (was 576), OpenFileDescriptor=841 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 467), ProcessCount=177 (was 177), AvailableMemoryMB=5265 (was 5257) - AvailableMemoryMB LEAK? 
- 2023-07-24 21:11:14,469 WARN [Listener at localhost/41541] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-24 21:11:14,469 INFO [Listener at localhost/41541] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 21:11:14,469 INFO [Listener at localhost/41541] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 21:11:14,469 DEBUG [Listener at localhost/41541] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x38ebbdb8 to 127.0.0.1:53183 2023-07-24 21:11:14,469 DEBUG [Listener at localhost/41541] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,469 DEBUG [Listener at localhost/41541] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 21:11:14,469 DEBUG [Listener at localhost/41541] util.JVMClusterUtil(257): Found active master hash=333772204, stopped=false 2023-07-24 21:11:14,469 DEBUG [Listener at localhost/41541] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 21:11:14,469 DEBUG [Listener at localhost/41541] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 21:11:14,469 INFO [Listener at localhost/41541] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:14,471 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:14,471 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:14,471 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:14,471 INFO [Listener at localhost/41541] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 21:11:14,471 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:14,471 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 21:11:14,472 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:14,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:14,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:14,472 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:14,472 DEBUG [Listener at localhost/41541] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x71e7e4d9 to 127.0.0.1:53183 2023-07-24 21:11:14,472 DEBUG [Listener at localhost/41541] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:14,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 21:11:14,472 INFO [Listener at localhost/41541] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35235,1690233070830' ***** 2023-07-24 21:11:14,473 INFO [Listener at localhost/41541] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:14,473 INFO [Listener at localhost/41541] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46505,1690233070882' ***** 2023-07-24 21:11:14,473 INFO [Listener at localhost/41541] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:14,473 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:14,473 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:14,473 INFO [Listener at localhost/41541] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40989,1690233070930' ***** 2023-07-24 21:11:14,476 INFO [Listener at localhost/41541] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:14,476 INFO [Listener at localhost/41541] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32963,1690233072337' ***** 2023-07-24 21:11:14,476 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:14,476 INFO [Listener at localhost/41541] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 21:11:14,478 INFO [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:14,480 INFO [RS:0;jenkins-hbase4:35235] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@338e392b{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:14,480 INFO [RS:1;jenkins-hbase4:46505] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7d780e1a{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:14,482 INFO [RS:2;jenkins-hbase4:40989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@76d07aad{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:14,483 INFO [RS:0;jenkins-hbase4:35235] server.AbstractConnector(383): Stopped ServerConnector@716c1dd1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:14,483 INFO 
[RS:3;jenkins-hbase4:32963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1791e06d{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-24 21:11:14,483 INFO [RS:0;jenkins-hbase4:35235] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:14,483 INFO [RS:2;jenkins-hbase4:40989] server.AbstractConnector(383): Stopped ServerConnector@727fe86{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:14,483 INFO [RS:1;jenkins-hbase4:46505] server.AbstractConnector(383): Stopped ServerConnector@4e9584e1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:14,484 INFO [RS:0;jenkins-hbase4:35235] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ee9182{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:14,484 INFO [RS:3;jenkins-hbase4:32963] server.AbstractConnector(383): Stopped ServerConnector@69993185{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:14,484 INFO [RS:1;jenkins-hbase4:46505] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:14,484 INFO [RS:2;jenkins-hbase4:40989] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:14,485 INFO [RS:3;jenkins-hbase4:32963] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:14,485 INFO [RS:0;jenkins-hbase4:35235] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@40a15cc5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:14,486 INFO [RS:1;jenkins-hbase4:46505] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5e0fd42b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:14,487 INFO [RS:2;jenkins-hbase4:40989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7c727e91{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:14,487 INFO [RS:3;jenkins-hbase4:32963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6eaad46f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:14,489 INFO [RS:1;jenkins-hbase4:46505] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@bb562ac{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:14,490 INFO [RS:2;jenkins-hbase4:40989] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@246b4380{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:14,490 INFO [RS:0;jenkins-hbase4:35235] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:14,491 INFO [RS:0;jenkins-hbase4:35235] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
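
[Editor's note] The /hbase/running znode activity a few entries above (a NodeDeleted event followed by each watcher re-registering on the now-absent znode) is how the region servers learn that cluster shutdown has been requested. Below is a minimal illustrative sketch of that watch pattern using only the plain Apache ZooKeeper client, not HBase's ZKWatcher/ZKUtil wrappers; the quorum address is the one that appears in this log, and the class name is invented for the example.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class RunningZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Quorum taken from the log above; session timeout is arbitrary.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:53183", 30000, event -> { });
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("event " + event.getType() + " on " + event.getPath());
    // exists() registers a one-shot watch even when the znode is absent, which is
    // the "Set watcher on znode that does not yet exist" behaviour the ZKUtil
    // entries above describe after /hbase/running is deleted.
    Stat stat = zk.exists("/hbase/running", watcher);
    System.out.println("/hbase/running is " + (stat == null ? "absent" : "present"));
    Thread.sleep(10_000); // keep the session open long enough to observe an event
    zk.close();
  }
}
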
2023-07-24 21:11:14,490 INFO [RS:3;jenkins-hbase4:32963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@303c9d88{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:14,491 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:14,491 INFO [RS:2;jenkins-hbase4:40989] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:14,491 INFO [RS:2;jenkins-hbase4:40989] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 21:11:14,491 INFO [RS:2;jenkins-hbase4:40989] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:14,491 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:14,491 INFO [RS:3;jenkins-hbase4:32963] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:14,491 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:14,491 INFO [RS:3;jenkins-hbase4:32963] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 21:11:14,491 INFO [RS:0;jenkins-hbase4:35235] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:14,491 INFO [RS:3;jenkins-hbase4:32963] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:14,491 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(3305): Received CLOSE for 496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:14,492 INFO [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:14,491 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:14,492 DEBUG [RS:3;jenkins-hbase4:32963] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f5d82a7 to 127.0.0.1:53183 2023-07-24 21:11:14,491 DEBUG [RS:2;jenkins-hbase4:40989] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x05131c58 to 127.0.0.1:53183 2023-07-24 21:11:14,491 INFO [RS:1;jenkins-hbase4:46505] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 21:11:14,492 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:14,492 INFO [RS:1;jenkins-hbase4:46505] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 21:11:14,492 INFO [RS:1;jenkins-hbase4:46505] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 21:11:14,492 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(3305): Received CLOSE for 27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:14,492 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 21:11:14,492 DEBUG [RS:2;jenkins-hbase4:40989] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,493 INFO [RS:2;jenkins-hbase4:40989] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
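
[Editor's note] The ResourceChecker warning at the top of this excerpt ("Thread=575 is superior to 500") flags that the JVM's live-thread count exceeded the test's threshold just before shutdown, a common symptom in flaky minicluster runs. The following is a rough sketch, using only standard JMX and nothing HBase-specific, of capturing a similar thread snapshot; it is not the ResourceChecker implementation, and the class name is hypothetical.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCountProbe {
  public static void main(String[] args) {
    ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    // Overall counts, comparable in spirit to the ResourceChecker's numbers.
    System.out.println("live=" + threads.getThreadCount()
        + " peak=" + threads.getPeakThreadCount()
        + " daemon=" + threads.getDaemonThreadCount());
    // Dump thread names to spot leaked pools (e.g. unclosed RPC or chore threads).
    for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
      if (info != null) {
        System.out.println("  " + info.getThreadName() + " [" + info.getThreadState() + "]");
      }
    }
  }
}
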
2023-07-24 21:11:14,493 INFO [RS:2;jenkins-hbase4:40989] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:14,493 INFO [RS:2;jenkins-hbase4:40989] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 21:11:14,493 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 21:11:14,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 496e5d996ebcd2737a599fa1f8d8aa19, disabling compactions & flushes 2023-07-24 21:11:14,492 DEBUG [RS:3;jenkins-hbase4:32963] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,493 INFO [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32963,1690233072337; all regions closed. 2023-07-24 21:11:14,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:14,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:14,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. after waiting 0 ms 2023-07-24 21:11:14,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:14,492 DEBUG [RS:0;jenkins-hbase4:35235] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x67397cdc to 127.0.0.1:53183 2023-07-24 21:11:14,493 DEBUG [RS:0;jenkins-hbase4:35235] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 496e5d996ebcd2737a599fa1f8d8aa19 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-24 21:11:14,493 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 21:11:14,494 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1478): Online Regions={496e5d996ebcd2737a599fa1f8d8aa19=hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19.} 2023-07-24 21:11:14,494 DEBUG [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1504): Waiting on 496e5d996ebcd2737a599fa1f8d8aa19 2023-07-24 21:11:14,494 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:14,494 DEBUG [RS:1;jenkins-hbase4:46505] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75c8943a to 127.0.0.1:53183 2023-07-24 21:11:14,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 27dd1507a222123ca6685060e35b9ff2, disabling compactions & flushes 2023-07-24 21:11:14,494 DEBUG [RS:1;jenkins-hbase4:46505] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,494 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 21:11:14,494 DEBUG [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1478): Online Regions={27dd1507a222123ca6685060e35b9ff2=hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2.} 2023-07-24 21:11:14,494 DEBUG 
[RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1504): Waiting on 27dd1507a222123ca6685060e35b9ff2 2023-07-24 21:11:14,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:14,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:14,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. after waiting 0 ms 2023-07-24 21:11:14,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:14,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 27dd1507a222123ca6685060e35b9ff2 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-24 21:11:14,499 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 21:11:14,499 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-24 21:11:14,499 DEBUG [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 21:11:14,499 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 21:11:14,499 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 21:11:14,500 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 21:11:14,500 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 21:11:14,500 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 21:11:14,500 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-24 21:11:14,502 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 21:11:14,502 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 21:11:14,505 DEBUG [RS:3;jenkins-hbase4:32963] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs 2023-07-24 21:11:14,505 INFO [RS:3;jenkins-hbase4:32963] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32963%2C1690233072337:(num 1690233072563) 2023-07-24 21:11:14,505 DEBUG [RS:3;jenkins-hbase4:32963] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,505 INFO [RS:3;jenkins-hbase4:32963] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:14,505 INFO [RS:3;jenkins-hbase4:32963] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, 
ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:14,505 INFO [RS:3;jenkins-hbase4:32963] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:14,505 INFO [RS:3;jenkins-hbase4:32963] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:14,505 INFO [RS:3;jenkins-hbase4:32963] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 21:11:14,507 INFO [RS:3;jenkins-hbase4:32963] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32963 2023-07-24 21:11:14,507 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:14,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2/.tmp/info/546a925d14444b21a4f28395f5fb349a 2023-07-24 21:11:14,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19/.tmp/m/708f619fcee243089096e33ed2f439a0 2023-07-24 21:11:14,526 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/.tmp/info/6544ab74cc414b1c8c5c0974d2f880fe 2023-07-24 21:11:14,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 546a925d14444b21a4f28395f5fb349a 2023-07-24 21:11:14,531 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6544ab74cc414b1c8c5c0974d2f880fe 2023-07-24 21:11:14,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 708f619fcee243089096e33ed2f439a0 2023-07-24 21:11:14,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2/.tmp/info/546a925d14444b21a4f28395f5fb349a as hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2/info/546a925d14444b21a4f28395f5fb349a 2023-07-24 21:11:14,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19/.tmp/m/708f619fcee243089096e33ed2f439a0 as hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19/m/708f619fcee243089096e33ed2f439a0 2023-07-24 21:11:14,535 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:14,539 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 546a925d14444b21a4f28395f5fb349a 2023-07-24 21:11:14,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2/info/546a925d14444b21a4f28395f5fb349a, entries=3, sequenceid=9, filesize=5.0 K 2023-07-24 21:11:14,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 708f619fcee243089096e33ed2f439a0 2023-07-24 21:11:14,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19/m/708f619fcee243089096e33ed2f439a0, entries=12, sequenceid=29, filesize=5.4 K 2023-07-24 21:11:14,540 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:14,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 27dd1507a222123ca6685060e35b9ff2 in 46ms, sequenceid=9, compaction requested=false 2023-07-24 21:11:14,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 496e5d996ebcd2737a599fa1f8d8aa19 in 48ms, sequenceid=29, compaction requested=false 2023-07-24 21:11:14,541 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:14,558 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:14,564 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/.tmp/rep_barrier/2046d4cb3f124aaeb622ee512b5e1ada 2023-07-24 21:11:14,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/rsgroup/496e5d996ebcd2737a599fa1f8d8aa19/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-24 21:11:14,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/namespace/27dd1507a222123ca6685060e35b9ff2/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-24 21:11:14,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:11:14,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 
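
[Editor's note] The flush entries just above follow a write-to-temp-then-commit pattern: DefaultStoreFlusher writes the memstore to a file under the region's .tmp directory, HRegionFileSystem then "commits" it by renaming it into the column-family directory (info/ or m/), and HStore reports it as added. Below is a minimal sketch of that pattern using only the public Hadoop FileSystem API; the paths and class name are hypothetical, and this is not HBase's actual flush code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpThenCommitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical layout standing in for data/<ns>/<table>/<region>/{.tmp,<family>}.
    Path tmpFile = new Path("hdfs://localhost:46175/demo/region/.tmp/flush-0001");
    Path committed = new Path("hdfs://localhost:46175/demo/region/family/flush-0001");
    FileSystem fs = tmpFile.getFileSystem(conf);
    // 1. Write the new file where readers never look.
    try (FSDataOutputStream out = fs.create(tmpFile, true)) {
      out.writeBytes("flushed cells would be encoded here\n");
    }
    // 2. Make it visible by renaming into the family directory, mirroring the
    //    HRegionFileSystem "Committing ... as ..." lines above.
    fs.mkdirs(committed.getParent());
    boolean ok = fs.rename(tmpFile, committed);
    System.out.println("committed=" + ok);
  }
}
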
2023-07-24 21:11:14,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 496e5d996ebcd2737a599fa1f8d8aa19: 2023-07-24 21:11:14,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690233071699.496e5d996ebcd2737a599fa1f8d8aa19. 2023-07-24 21:11:14,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:14,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 27dd1507a222123ca6685060e35b9ff2: 2023-07-24 21:11:14,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690233071526.27dd1507a222123ca6685060e35b9ff2. 2023-07-24 21:11:14,571 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2046d4cb3f124aaeb622ee512b5e1ada 2023-07-24 21:11:14,581 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/.tmp/table/ac387af2f85d447fba2ad7f3f23905ce 2023-07-24 21:11:14,585 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ac387af2f85d447fba2ad7f3f23905ce 2023-07-24 21:11:14,586 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/.tmp/info/6544ab74cc414b1c8c5c0974d2f880fe as hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/info/6544ab74cc414b1c8c5c0974d2f880fe 2023-07-24 21:11:14,590 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6544ab74cc414b1c8c5c0974d2f880fe 2023-07-24 21:11:14,591 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/info/6544ab74cc414b1c8c5c0974d2f880fe, entries=22, sequenceid=26, filesize=7.3 K 2023-07-24 21:11:14,591 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/.tmp/rep_barrier/2046d4cb3f124aaeb622ee512b5e1ada as hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/rep_barrier/2046d4cb3f124aaeb622ee512b5e1ada 2023-07-24 21:11:14,596 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2046d4cb3f124aaeb622ee512b5e1ada 2023-07-24 21:11:14,596 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/rep_barrier/2046d4cb3f124aaeb622ee512b5e1ada, entries=1, sequenceid=26, filesize=4.9 K 2023-07-24 
21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32963,1690233072337 2023-07-24 21:11:14,597 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:14,597 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/.tmp/table/ac387af2f85d447fba2ad7f3f23905ce as hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/table/ac387af2f85d447fba2ad7f3f23905ce 2023-07-24 21:11:14,602 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ac387af2f85d447fba2ad7f3f23905ce 2023-07-24 21:11:14,602 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/table/ac387af2f85d447fba2ad7f3f23905ce, entries=6, sequenceid=26, filesize=5.1 K 2023-07-24 21:11:14,602 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 102ms, sequenceid=26, compaction requested=false 2023-07-24 21:11:14,611 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-24 21:11:14,612 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 21:11:14,613 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 21:11:14,613 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 21:11:14,613 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 21:11:14,694 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35235,1690233070830; all regions closed. 2023-07-24 21:11:14,694 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46505,1690233070882; all regions closed. 2023-07-24 21:11:14,696 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32963,1690233072337] 2023-07-24 21:11:14,696 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32963,1690233072337; numProcessing=1 2023-07-24 21:11:14,697 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32963,1690233072337 already deleted, retry=false 2023-07-24 21:11:14,697 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32963,1690233072337 expired; onlineServers=3 2023-07-24 21:11:14,699 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40989,1690233070930; all regions closed. 
2023-07-24 21:11:14,713 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,46505,1690233070882/jenkins-hbase4.apache.org%2C46505%2C1690233070882.1690233071367 not finished, retry = 0 2023-07-24 21:11:14,714 DEBUG [RS:0;jenkins-hbase4:35235] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs 2023-07-24 21:11:14,714 INFO [RS:0;jenkins-hbase4:35235] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35235%2C1690233070830:(num 1690233071362) 2023-07-24 21:11:14,714 DEBUG [RS:0;jenkins-hbase4:35235] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,714 INFO [RS:0;jenkins-hbase4:35235] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:14,715 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/WALs/jenkins-hbase4.apache.org,40989,1690233070930/jenkins-hbase4.apache.org%2C40989%2C1690233070930.meta.1690233071463.meta not finished, retry = 0 2023-07-24 21:11:14,715 INFO [RS:0;jenkins-hbase4:35235] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:14,715 INFO [RS:0;jenkins-hbase4:35235] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:14,715 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:14,715 INFO [RS:0;jenkins-hbase4:35235] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:14,715 INFO [RS:0;jenkins-hbase4:35235] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 21:11:14,716 INFO [RS:0;jenkins-hbase4:35235] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35235 2023-07-24 21:11:14,719 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:14,719 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:14,719 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:14,719 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35235,1690233070830 2023-07-24 21:11:14,720 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35235,1690233070830] 2023-07-24 21:11:14,720 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35235,1690233070830; numProcessing=2 2023-07-24 21:11:14,721 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35235,1690233070830 already deleted, retry=false 2023-07-24 21:11:14,721 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35235,1690233070830 expired; onlineServers=2 2023-07-24 21:11:14,816 DEBUG [RS:1;jenkins-hbase4:46505] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs 2023-07-24 21:11:14,816 INFO [RS:1;jenkins-hbase4:46505] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46505%2C1690233070882:(num 1690233071367) 2023-07-24 21:11:14,816 DEBUG [RS:1;jenkins-hbase4:46505] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,816 INFO [RS:1;jenkins-hbase4:46505] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:14,817 INFO [RS:1;jenkins-hbase4:46505] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:14,817 INFO [RS:1;jenkins-hbase4:46505] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 21:11:14,817 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:14,817 INFO [RS:1;jenkins-hbase4:46505] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 21:11:14,817 INFO [RS:1;jenkins-hbase4:46505] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 21:11:14,818 INFO [RS:1;jenkins-hbase4:46505] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46505 2023-07-24 21:11:14,820 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:14,820 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46505,1690233070882 2023-07-24 21:11:14,820 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:14,820 DEBUG [RS:2;jenkins-hbase4:40989] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs 2023-07-24 21:11:14,820 INFO [RS:2;jenkins-hbase4:40989] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40989%2C1690233070930.meta:.meta(num 1690233071463) 2023-07-24 21:11:14,822 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46505,1690233070882] 2023-07-24 21:11:14,822 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46505,1690233070882; numProcessing=3 2023-07-24 21:11:14,824 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46505,1690233070882 already deleted, retry=false 2023-07-24 21:11:14,824 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46505,1690233070882 expired; onlineServers=1 2023-07-24 21:11:14,827 DEBUG [RS:2;jenkins-hbase4:40989] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/oldWALs 2023-07-24 21:11:14,827 INFO [RS:2;jenkins-hbase4:40989] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40989%2C1690233070930:(num 1690233071363) 2023-07-24 21:11:14,827 DEBUG [RS:2;jenkins-hbase4:40989] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,827 INFO [RS:2;jenkins-hbase4:40989] regionserver.LeaseManager(133): Closed leases 2023-07-24 21:11:14,827 INFO [RS:2;jenkins-hbase4:40989] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 21:11:14,827 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 21:11:14,828 INFO [RS:2;jenkins-hbase4:40989] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40989 2023-07-24 21:11:14,830 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 21:11:14,830 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40989,1690233070930 2023-07-24 21:11:14,831 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40989,1690233070930] 2023-07-24 21:11:14,831 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40989,1690233070930; numProcessing=4 2023-07-24 21:11:14,832 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40989,1690233070930 already deleted, retry=false 2023-07-24 21:11:14,832 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40989,1690233070930 expired; onlineServers=0 2023-07-24 21:11:14,832 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40875,1690233070775' ***** 2023-07-24 21:11:14,832 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 21:11:14,833 DEBUG [M:0;jenkins-hbase4:40875] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17db7308, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 21:11:14,833 INFO [M:0;jenkins-hbase4:40875] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 21:11:14,836 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 21:11:14,836 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 21:11:14,836 INFO [M:0;jenkins-hbase4:40875] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7d884c2{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-24 21:11:14,836 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 21:11:14,836 INFO [M:0;jenkins-hbase4:40875] server.AbstractConnector(383): Stopped ServerConnector@74a4f3c2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:14,836 INFO [M:0;jenkins-hbase4:40875] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 21:11:14,837 INFO [M:0;jenkins-hbase4:40875] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@7e374ea1{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-24 21:11:14,838 INFO [M:0;jenkins-hbase4:40875] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c092624{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/hadoop.log.dir/,STOPPED} 2023-07-24 21:11:14,839 INFO [M:0;jenkins-hbase4:40875] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40875,1690233070775 2023-07-24 21:11:14,839 INFO [M:0;jenkins-hbase4:40875] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40875,1690233070775; all regions closed. 2023-07-24 21:11:14,839 DEBUG [M:0;jenkins-hbase4:40875] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 21:11:14,839 INFO [M:0;jenkins-hbase4:40875] master.HMaster(1491): Stopping master jetty server 2023-07-24 21:11:14,839 INFO [M:0;jenkins-hbase4:40875] server.AbstractConnector(383): Stopped ServerConnector@3fa19534{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 21:11:14,840 DEBUG [M:0;jenkins-hbase4:40875] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 21:11:14,840 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 21:11:14,840 DEBUG [M:0;jenkins-hbase4:40875] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 21:11:14,840 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233071103] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690233071103,5,FailOnTimeoutGroup] 2023-07-24 21:11:14,840 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233071103] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690233071103,5,FailOnTimeoutGroup] 2023-07-24 21:11:14,840 INFO [M:0;jenkins-hbase4:40875] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 21:11:14,840 INFO [M:0;jenkins-hbase4:40875] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 21:11:14,840 INFO [M:0;jenkins-hbase4:40875] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 21:11:14,840 DEBUG [M:0;jenkins-hbase4:40875] master.HMaster(1512): Stopping service threads 2023-07-24 21:11:14,840 INFO [M:0;jenkins-hbase4:40875] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 21:11:14,840 ERROR [M:0;jenkins-hbase4:40875] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 21:11:14,841 INFO [M:0;jenkins-hbase4:40875] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 21:11:14,841 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-24 21:11:14,841 DEBUG [M:0;jenkins-hbase4:40875] zookeeper.ZKUtil(398): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 21:11:14,841 WARN [M:0;jenkins-hbase4:40875] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 21:11:14,841 INFO [M:0;jenkins-hbase4:40875] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 21:11:14,841 INFO [M:0;jenkins-hbase4:40875] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 21:11:14,841 DEBUG [M:0;jenkins-hbase4:40875] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 21:11:14,841 INFO [M:0;jenkins-hbase4:40875] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:14,841 DEBUG [M:0;jenkins-hbase4:40875] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:14,841 DEBUG [M:0;jenkins-hbase4:40875] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 21:11:14,841 DEBUG [M:0;jenkins-hbase4:40875] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 21:11:14,841 INFO [M:0;jenkins-hbase4:40875] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.22 KB heapSize=90.66 KB 2023-07-24 21:11:14,852 INFO [M:0;jenkins-hbase4:40875] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.22 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/224bd6d2b751467a9a841b9bb5b6c833 2023-07-24 21:11:14,858 DEBUG [M:0;jenkins-hbase4:40875] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/224bd6d2b751467a9a841b9bb5b6c833 as hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/224bd6d2b751467a9a841b9bb5b6c833 2023-07-24 21:11:14,862 INFO [M:0;jenkins-hbase4:40875] regionserver.HStore(1080): Added hdfs://localhost:46175/user/jenkins/test-data/6459b0f7-01d2-1667-35ad-cb8ed15c2e40/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/224bd6d2b751467a9a841b9bb5b6c833, entries=22, sequenceid=175, filesize=11.1 K 2023-07-24 21:11:14,863 INFO [M:0;jenkins-hbase4:40875] regionserver.HRegion(2948): Finished flush of dataSize ~76.22 KB/78049, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false 2023-07-24 21:11:14,864 INFO [M:0;jenkins-hbase4:40875] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 21:11:14,864 DEBUG [M:0;jenkins-hbase4:40875] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 21:11:14,867 INFO [M:0;jenkins-hbase4:40875] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 21:11:14,867 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 21:11:14,868 INFO [M:0;jenkins-hbase4:40875] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40875 2023-07-24 21:11:14,869 DEBUG [M:0;jenkins-hbase4:40875] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40875,1690233070775 already deleted, retry=false 2023-07-24 21:11:14,972 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:14,972 INFO [M:0;jenkins-hbase4:40875] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40875,1690233070775; zookeeper connection closed. 2023-07-24 21:11:14,972 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): master:40875-0x101992c67a90000, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,072 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,072 INFO [RS:2;jenkins-hbase4:40989] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40989,1690233070930; zookeeper connection closed. 2023-07-24 21:11:15,072 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:40989-0x101992c67a90003, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,072 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1368976f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1368976f 2023-07-24 21:11:15,172 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,172 INFO [RS:1;jenkins-hbase4:46505] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46505,1690233070882; zookeeper connection closed. 2023-07-24 21:11:15,172 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:46505-0x101992c67a90002, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,172 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5d12b273] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5d12b273 2023-07-24 21:11:15,272 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,272 INFO [RS:0;jenkins-hbase4:35235] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35235,1690233070830; zookeeper connection closed. 
2023-07-24 21:11:15,272 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:35235-0x101992c67a90001, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,273 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@72eb3816] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@72eb3816 2023-07-24 21:11:15,373 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,373 INFO [RS:3;jenkins-hbase4:32963] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32963,1690233072337; zookeeper connection closed. 2023-07-24 21:11:15,373 DEBUG [Listener at localhost/41541-EventThread] zookeeper.ZKWatcher(600): regionserver:32963-0x101992c67a9000b, quorum=127.0.0.1:53183, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 21:11:15,373 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5b54115f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5b54115f 2023-07-24 21:11:15,373 INFO [Listener at localhost/41541] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-24 21:11:15,373 WARN [Listener at localhost/41541] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:15,377 INFO [Listener at localhost/41541] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:15,480 WARN [BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 21:11:15,480 WARN [BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-796522066-172.31.14.131-1690233070025 (Datanode Uuid fe67c5e2-f63f-46b9-aaac-5bb077c7a390) service to localhost/127.0.0.1:46175 2023-07-24 21:11:15,481 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data5/current/BP-796522066-172.31.14.131-1690233070025] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:15,481 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data6/current/BP-796522066-172.31.14.131-1690233070025] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:15,482 WARN [Listener at localhost/41541] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:15,486 INFO [Listener at localhost/41541] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:15,589 WARN [BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-24 21:11:15,590 WARN [BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-796522066-172.31.14.131-1690233070025 (Datanode Uuid c99bb084-2850-46a2-af12-d408e98a2a6e) service to localhost/127.0.0.1:46175 2023-07-24 21:11:15,590 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data3/current/BP-796522066-172.31.14.131-1690233070025] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:15,591 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data4/current/BP-796522066-172.31.14.131-1690233070025] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:15,592 WARN [Listener at localhost/41541] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 21:11:15,595 INFO [Listener at localhost/41541] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:15,697 WARN [BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 21:11:15,697 WARN [BP-796522066-172.31.14.131-1690233070025 heartbeating to localhost/127.0.0.1:46175] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-796522066-172.31.14.131-1690233070025 (Datanode Uuid f549d505-3391-4810-b83d-a45c151e323a) service to localhost/127.0.0.1:46175 2023-07-24 21:11:15,698 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data1/current/BP-796522066-172.31.14.131-1690233070025] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:15,699 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8a77041c-0fa7-74ac-a392-42367248ddfa/cluster_7c66ec89-4155-fe50-2a16-e0ce25685be0/dfs/data/data2/current/BP-796522066-172.31.14.131-1690233070025] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 21:11:15,708 INFO [Listener at localhost/41541] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 21:11:15,823 INFO [Listener at localhost/41541] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 21:11:15,863 INFO [Listener at localhost/41541] hbase.HBaseTestingUtility(1293): Minicluster is down
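
[Editor's note] The whole excerpt above is the teardown half of a minicluster-backed test. Below is a minimal sketch, assuming JUnit 4 and the HBaseTestingUtility API these tests are built on, of the lifecycle that ends in a shutdown sequence like the one logged here; the class name, test body and region-server count are illustrative only.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    // Spins up in-process HDFS, ZooKeeper, a master and three region servers.
    TEST_UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    // Emits a shutdown sequence like the log above: region servers flush and close
    // their regions, WALs move to oldWALs, then the master, DataNodes and the
    // MiniZK cluster stop, ending with "Minicluster is down".
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void masterIsInitialized() throws Exception {
    Assert.assertTrue(TEST_UTIL.getHBaseCluster().getMaster().isInitialized());
  }
}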